Hi,
Can you please attach full engine and vdsm logs?
On Thu, Jul 13, 2017 at 1:07 AM, Devin Acosta wrote:
> We are running a fresh install of oVIRT 4.1.3, using ISCSI, the VM in
> question has multiple Disks (4 to be exact). It snapshotted OK while on
> iSCSI however when I went to delete the si
[Adding ovirt-users]
On Sun, Jul 16, 2017 at 12:58 PM, Benny Zlotnik wrote:
> We can see a lot of related errors in the engine log but we are unable
> to correlate to the vdsm log. Do you have more hosts? If yes, please
> attach their logs as well.
> And just to be sure you were a
Hi,
Can you please provide the versions of vdsm, qemu, libvirt?
On Sun, Jul 30, 2017 at 1:01 PM, Johan Bernhardsson wrote:
> Hello,
>
> We get this error message while moving or copying some of the disks on
> our main cluster running 4.1.2 on centos7
>
> This is shown in the engine:
> VDSM vbgk
sion:
> glusterfs-3.8.11-1.el7
> CEPH Version:
> librbd1-0.94.5-1.el7
>
> qemu-img version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.9.1), Copyright (c)
> 2004-2008 Fabrice Bellard
>
> This is what i have on the hosts.
>
> /Johan
>
> On Sun, 2017-07-30 at 13:56 +0300, Benny Zlot
Hi,
Look at [1]; however, there are caveats, so be sure to pay close attention to
the warning section.
[1] - https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/localdisk/README
On Tue, Sep 5, 2017 at 4:52 PM, Benny Zlotnik wrote:
> Hi,
>
> Look at [1], however there are caveats so b
Accidentally replied without cc-ing the list
On Sun, Sep 3, 2017 at 12:21 PM, Benny Zlotnik wrote:
> Hi,
>
> Could you provide full engine and vdsm logs?
>
> On Sat, Sep 2, 2017 at 4:23 PM, wai chun hung
> wrote:
>
>> Dear all,
>> This is my first time to ask
Hi Terry,
The disk in the snapshot appears to be in an illegal state. How long has it
been like this? Do you have logs from when it happened?
On Tue, Sep 5, 2017 at 8:52 PM, Terry hey wrote:
> Dear all,
> Thank you for your time to read this post first.
> In the same host, there are four virtua
This was fixed in 4.3.6, I suggest upgrading
On Tue, Nov 12, 2019 at 12:45 PM wrote:
>
> Hi,
>
> I'm running ovirt Version:4.3.4.3-1.el7
> My filesystem disk has 30 GB free space.
> Cannot start a VM due to an I/O error storage.
> When tryng to move the disk to another storage domain get this err
The current plan to integrate Ceph is via the cinderlib integration[1]
(currently in tech preview mode). Because we still have no packaging
ready, there are some manual installation steps required, but there is
no need to install and configure OpenStack/Cinder.
>1. Does this require you to install Open
Works fine for me, anything interesting in the browser console?
On Sat, Nov 23, 2019 at 7:04 PM Strahil Nikolov wrote:
>
> Hello Community,
>
> I have a constantly loading chrome on my openSuSE 15.1 (and my android
> phone), while firefox has no issues .
> Can someone test accessing the oVirt Ad
> We are using Ceph with oVirt (via standalone Cinder) extensively in a
> production environment.
> I tested oVirt cinderlib integration in our dev environment, gave some
> feedback here on the list and am currently waiting for the future
> development. IMHO cinderlib in oVirt is currently not fit
Please attach engine and vdsm logs and specify the versions
On Mon, Dec 23, 2019 at 10:08 AM Vijay Sachdeva
wrote:
>
> Hi All,
>
>
>
> I am trying to import a VM from export domain, but import fails.
>
>
>
> Setup:
>
>
>
> Source DC has a NFS shared storage with two Hosts
> Destination DC has a l
One host has to connect and set up the storage (mount the path, create
the files, etc.), so you are given the choice of which host to use for this.
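For example, with the Python SDK the host is passed explicitly when the domain
is created (an untested sketch; the URL, credentials and names are placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# connect to the engine API (placeholder URL/credentials)
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

sds_service = connection.system_service().storage_domains_service()
# 'myhost' is the host that will mount the export and create the domain files
sds_service.add(
    types.StorageDomain(
        name='mydata',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='myhost'),
        storage=types.HostStorage(
            type=types.StorageType.NFS,
            address='nfs.example.com',
            path='/exports/mydata',
        ),
    ),
)
connection.close()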
On Mon, Dec 30, 2019 at 11:07 AM wrote:
>
> hello and happy new year~
>
> I am wondering the role of "use host" field in storage domain creation.
>
> https:
Did you change the volume metadata to LEGAL on the storage as well?
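If you want to check what the storage itself reports, something like this should
print the volume metadata, including its legality (a sketch; the UUIDs are
placeholders and the exact parameter names may differ between vdsm versions):

$ vdsm-client Volume getInfo storagepoolID=<sp-uuid> storagedomainID=<sd-uuid> imageID=<img-uuid> volumeID=<vol-uuid>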
On Thu, Jan 9, 2020 at 2:19 PM David Johnson
wrote:
> We had a drive in our NAS fail, but afterwards one of our VM's will not
> start.
>
> The boot drive on the VM is (so near as I can tell) the only drive
> affected.
>
> I con
you can attach the storage domain to another engine and import it
On Mon, Feb 3, 2020 at 11:45 PM matteo fedeli wrote:
>
> Hi, It's possibile recover a VM if the engine is damaged? the vm is on a data
> storage domain.
Is the VM running? Can you remove it when the VM is down?
Can you find the reason for illegal status in the logs?
On Tue, Feb 4, 2020 at 5:06 PM Crazy Ayansh
wrote:
> Hey Guys,
>
> Any help on it ?
>
> Thanks
>
> On Tue, Feb 4, 2020 at 4:04 PM Crazy Ayansh
> wrote:
>
>>
>> Hi Team,
>>
>> I am
Please help.
>
> Thanks
> Shashank
>
>
>
> On Tue, Feb 4, 2020 at 8:54 PM Benny Zlotnik wrote:
>
>> Is the VM running? Can you remove it when the VM is down?
>> Can you find the reason for illegal status in the logs?
>>
>> On Tue, Feb 4, 2020 at
you need to go to the "import vm" tab on the storage domain and import them
On Tue, Feb 4, 2020 at 7:30 PM matteo fedeli wrote:
>
> it does automatically when I attach or should I execute particular operations?
tatus of the chain on vdsm
As well as `virsh -r dumpxml ind-co-ora-ee-02` (assuming ind-co-ora-ee-02
is the VM with the issue)
Changing the snapshot status with unlock_entity will likely work only if
the chain is fine on the storage
On Tue, Feb 4, 2020 at 7:40 PM Crazy Ayansh
wrote:
> pleas
anything in the vdsm or engine logs?
On Sun, Feb 23, 2020 at 4:23 PM Robert Webb wrote:
>
> Also, I did do the “Login” to connect to the target without issue, from what
> I can tell.
>
>
>
> From: Robert Webb
> Sent: Sunday, February 23, 2020 9:06 AM
> To: users@ovirt.org
> Subject: iSCSI Domain
We use the stats API in the engine, currently only to check whether the
backend is accessible. We have plans to use it for monitoring and
validations, but that is not implemented yet.
On Mon, Feb 24, 2020 at 3:35 PM Nir Soffer wrote:
>
> On Mon, Feb 24, 2020 at 3:03 PM Gorka Eguileor wrote:
> >
> > On 22
It hasn't disappeared; there has been work done to move operations
that used to run only on the SPM to regular hosts as well
(copy/move disk).
Currently the main operations performed by the SPM are
create/delete/extend volume, and more[1]
[1]
https://github.com/oVirt/ovirt-engine/tree/master/backen
Anything in the logs (engine, vdsm)?
If there's nothing on the storage, removing it from the database should
be safe, but it's best to check why it failed.
On Mon, Apr 20, 2020 at 5:39 PM Strahil Nikolov wrote:
>
> Hello All,
>
> did anyone observe the following behaviour:
>
> 1. Create a new disk fro
> 1. The engine didn't clean it up itself - after all , no mater the reason,
> the operation has failed?
I can't really answer without looking at the logs. The engine should clean up
in case of a failure; there can be numerous reasons for cleanup to
fail (connectivity issues, a bug, etc.).
> 2. Why the query
Live merge (snapshot removal) runs on the host where the VM is
running; you can look for the job id
(f694590a-1577-4dce-bf0c-3a8d74adf341) on the relevant host.
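For example, on that host (assuming the default vdsm log location):

$ grep f694590a-1577-4dce-bf0c-3a8d74adf341 /var/log/vdsm/vdsm.log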
On Wed, May 27, 2020 at 9:02 AM David Sekne wrote:
>
> Hello,
>
> I'm running oVirt version 4.3.9.4-1.el7.
>
> After a failed live
You can't see it because it is not a task; tasks only run on the SPM. It
is a VM job, and the data about it is stored in the VM's XML as well as
in the vm_jobs table.
You can see the status of the job in libvirt with `virsh blockjob
sda --info` (if it's still running).
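To look at the vm_jobs table on the engine database, something like this should
work (same psql form as the other examples in this thread):

$ psql -U engine -d engine -c "SELECT * FROM vm_jobs;"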
On Wed, May 27, 2020 at
Can you share the VM's XML?
It can be obtained with `virsh -r dumpxml <vm-name>`
Is the VM overloaded? I suspect it has trouble converging.
taskcleaner only cleans up the database, I don't think it will help here.
1 Tb disk) yet not overloaded. We
> have multiple servers with the same specs with no issues.
>
> Regards,
>
> On Wed, May 27, 2020 at 2:28 PM Benny Zlotnik wrote:
>>
>> Can you share the VM's xml?
>> Can be obtained with `virsh -r dumpxml `
>> Is the VM ove
>> Best Regards,
>> Strahil Nikolov
>>
>> На 27 май 2020 г. 17:39:36 GMT+03:00, Benny Zlotnik
>> написа:
>> >Sorry, by overloaded I meant in terms of I/O, because this is an
>> >active layer merge, the active layer
>> >(aabf3788-8e47-4f8b-84ad-a
I've successfully used Rocky with 4.3 in the past; the main caveat
with 4.3 currently is that cinderlib has to be forced to 0.9.0 (pip
install cinderlib==0.9.0).
Let me know if you have any issues.
Hopefully during 4.4 we will have the repositories with the RPMs and
installation will be much ea
Yes, it looks like a configuration issue; you can use plain `rbd` to
check connectivity.
Regarding starting VMs and live migration, are there bug reports for these?
There is an issue we're aware of with live migration[1]; it can be
worked around by blacklisting rbd devices in multipath.conf.
[1
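A minimal illustration of such a blacklist entry in /etc/multipath.conf (the
exact regex is an assumption and may need adjusting to your device naming):

blacklist {
    devnode "^rbd[0-9]*"
}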
Yes, that's because cinderlib uses krbd, so it has fewer features; I
should add this to the documentation.
I was told cinderlib has plans to add support for rbd-nbd, which would
eventually allow the use of newer features.
On Mon, Jun 8, 2020 at 9:40 PM Mathias Schwenke
wrote:
>
> > It looks like a confi
looks like https://bugzilla.redhat.com/show_bug.cgi?id=1785939
On Mon, Jun 15, 2020 at 2:37 PM Yedidyah Bar David wrote:
>
> On Mon, Jun 15, 2020 at 2:13 PM minnie...@vinchin.com
> wrote:
> >
> > Hi,
> >
> > I tried to send the log to you by email, but it fails. So I have sent them
> > to Googl
Can you please provide full vdsm logs (only the engine log is attached) and
the versions of the engine, vdsm, gluster?
On Tue, Nov 14, 2017 at 6:16 PM, Bryan Sockel wrote:
> Having an issue moving a hard disk from one vm data store new a newly
> created gluster data store. I can shut down the m
Hi,
This looks like a bug. Can you please file a report with the steps and full
logs on https://bugzilla.redhat.com?
From looking at the logs it looks like it's related to the user field being
empty.
On Wed, Nov 15, 2017 at 1:40 PM, wrote:
> Hi,
>
> I'm trying to connect a new oVirt Engine Versi
Hi Tibor,
Can you please explain this part: "After this I just wondered, I will make
a new VM with same disk and I will copy the images (really just rename)
from original to recreated."
What were the exact steps you took?
Thanks
On Thu, Nov 16, 2017 at 4:19 PM, Demeter Tibor wrote:
> Hi,
>
> T
Hi,
Please attach full engine and vdsm logs
On Sun, Nov 19, 2017 at 12:26 PM, Алексей Максимов <
aleksey.i.maksi...@yandex.ru> wrote:
>
> Hello, oVirt guru`s!
>
> oVirt Engine Version: 4.1.6.2-1.el7.centos
>
> Some time ago the problems started with the oVirt administrative web
> console.
> When
+ ovirt-users
On Sun, Nov 19, 2017 at 5:40 PM, Benny Zlotnik wrote:
> Hi,
>
> There are a couple of issues here, can you please open a bug so we can
> track this properly? https://bugzilla.redhat.com/
> and attach all relevant logs
>
> I went over the logs, are you sure
c90-e574-4282-b1ee-779602e35f24/
> master/vms/f4429fa5-76a2-45a7-ae3e-4d8955d4f1a6
>
> total 16
> drwxr-xr-x. 2 vdsm kvm 4096 Nov 9 02:32 .
> drwxr-xr-x. 106 vdsm kvm 12288 Nov 9 02:32 ..
>
> I can just remove this directory?
>
> 19.11.2017, 18:51, "Benny Zlotnik"
Please attach engine and vdsm logs
On Tue, Nov 21, 2017 at 2:11 PM, Arthur Melo wrote:
> Can someone help me with this error?
>
>
> Failed to delete snapshot '' for VM 'proxy03'.
>
>
>
> Atenciosamente,
> Arthur Melo
> Linux User #302250
>
>
ks.CommandCallbacksPoller]
> (DefaultQuartzScheduler10) [70cc2ffa-2414-4a00-9e24-6b6378408a9d] Failed
> invoking callback end method 'onFailed' for command
> 'a84519fe-6b23-4084-84a2-b7964cbcde26' with exception 'null', the
> callback is marked for end me
en a bug and attach my logs?
>
> 20.11.2017, 13:08, "Benny Zlotnik" :
>
> Yes, you can remove it
>
> On Mon, Nov 20, 2017 at 8:10 AM, Алексей Максимов <
> aleksey.i.maksi...@yandex.ru> wrote:
>
> I found an empty directory in the Export domain storage:
Regarding the first question: there is a bug open for this issue [1]
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1513987
On Fri, Dec 22, 2017 at 1:42 PM, Nathanaël Blanchet
wrote:
> Hi all,
>
> On 4.2, it seems that it is not possible anymore to move a disk to an
> other storage domain th
Can you please provide the log with the error?
On Sat, Jan 6, 2018 at 5:09 PM, carl langlois
wrote:
> Hi again,
>
> I manage to go a little bit further.. I was not able to set one host to
> maintenance because they had running vm.. so i force it to mark it as
> reboot and flush any vm and now i
Can you please attach engine and vdsm logs?
On 12 Jan 2018 11:43, "Tomeu Sastre Cabanellas" wrote:
> hi there,
>
> i'm testing ovirt 4.2 because I want to migrate all our VMs from
> XenServer, I have set a engine and a node, when conecting to the node I
> receive a "non-operational" and I cannot
Hi,
Can you please attach engine and vdsm logs?
On Tue, Jan 23, 2018 at 1:55 PM, Chris Boot wrote:
> Hi all,
>
> I'm running oVirt 4.2.0 and have been using oVirtBackup with it. So far
> it has been working fine, until this morning. Once of my VMs seems to
> have had a snapshot created that I c
Hi,
By default there are two OVF_STORE disks per domain. This can be changed with
the StorageDomainOvfStoreCount config value.
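For example, on the engine machine (the value 3 is just an illustration; the
engine service needs a restart for the change to take effect):

$ engine-config -s StorageDomainOvfStoreCount=3
$ systemctl restart ovirt-engine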
On Wed, Jan 24, 2018 at 1:58 PM, Stefano Danzi wrote:
> Hello,
>
> I'm checking Storage -> Disks in my oVirt test site. I can find:
>
> - 4 disks for my 4 VM
> - 1 disk for H
It was replaced by vdsm-client[1]
[1] - https://www.ovirt.org/develop/developer-guide/vdsm/vdsm-client/
On Tue, Feb 6, 2018 at 10:17 AM, Alex K wrote:
> Hi all,
>
> I have a stuck snapshot removal from a VM which is blocking the VM to
> start.
> In ovirt 4.1 I was able to cancel the stuck task
Under the 3 dots as can be seen in the attached screenshot
On Thu, Feb 15, 2018 at 7:07 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:
>
>
> > On 15 Feb 2018, at 14:17, Andrei V wrote:
> >
> > Hi !
> >
> >
> > I can’t locate “Sparsify” disk image command anywhere in oVirt 4.2.
> > Wh
Hi Bryan,
You can go into the template -> storage tab -> select the disk and remove
it there
On Fri, Mar 30, 2018 at 4:50 PM, Bryan Sockel
wrote:
> Hi,
>
>
> We are in the process of re-doing one of our storage domains. As part of
> the process I needed to relocate my templates over to a tempo
You can do that using something like:
snapshot_service = snapshots_service.snapshot_service(snapshot.id)
snapshot = snapshot_service.get()
if snapshot.snapshot_status == types.SnapshotStatus.OK:
    ...
But relying on the snapshot status is race-prone, so in 4.2 a se
Can you provide the full engine and vdsm logs?
On Mon, 9 Apr 2018, 22:08 Scott Walker, wrote:
> Log file error is:
>
> 2018-04-09 15:05:09,576-04 WARN [org.ovirt.engine.core.bll.RunVmCommand]
> (default task-28) [5f605594-423e-43f6-9e42-e47453518701] Validation of
> action 'RunVm' failed for us
Is the storage domain marked as backup?
If it is, you cannot use its disks in an active VM. You can remove the flag
and try again
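If you prefer to clear the flag via the SDK rather than the UI, a rough sketch
(assuming an existing ovirtsdk4 connection; sd_id is a placeholder):

import ovirtsdk4.types as types

sd_service = connection.system_service().storage_domains_service().storage_domain_service(sd_id)
# clear the backup flag so the domain's disks can be used by running VMs
sd_service.update(types.StorageDomain(backup=False))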
On Mon, Apr 9, 2018 at 10:52 PM, Scott Walker
wrote:
> All relevant log files.
>
> On 9 April 2018 at 15:21, Benny Zlotnik wrote:
>
>> Can yo
Can you attach engine and vdsm logs?
Also, which version are you using?
On Wed, 18 Apr 2018, 19:23 , wrote:
> Hello All,
>
> after an update and a reboot, 3 vm's are indicated as diskless.
> When I try to add disks I indeed see 3 available disks, but I also see that
> all 3 are indicated to be
Looks like you hit this: https://bugzilla.redhat.com/show_bug.cgi?id=1569420
On Thu, Apr 19, 2018 at 3:25 PM, Roger Meier
wrote:
> Hi all,
>
> I wanted to add a new host to our current oVirt 4.2.2 setup and the
> install of the host fail with the following error message:
>
> /var/log/ovirt-engin
It is in the disk_image_dynamic table
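For example (a sketch; the column name is an assumption, check the schema):

$ psql -U engine -d engine -c "SELECT * FROM disk_image_dynamic WHERE image_id = '<disk-image-id>';"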
On Thu, Apr 19, 2018 at 3:36 PM, Hari Prasanth Loganathan <
hariprasant...@msystechnologies.com> wrote:
> Hi Team,
>
> I am trying to get the disk level statistics using oVirt with the
> following API,
>
> /ovirt-engine/api/disks/{unique_disk_id}/statistics/
Looks like a bug. Can you please file a report:
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine
On Mon, Apr 23, 2018 at 9:38 PM, ~Stack~ wrote:
> Greetings,
>
> After my rebuild, I have imported my VM's. Everything went smooth and
> all of them came back, except one. One VM gives
Can you provide the logs? engine and vdsm.
Did you perform a live migration (the VM is running) or cold?
On Fri, May 11, 2018 at 2:49 PM, Juan Pablo
wrote:
> Hi! , Im strugled about an ongoing problem:
> after migrating a vm's disk from an iscsi domain to a nfs and ovirt
> reporting the migrati
T-03:00 Juan Pablo :
>
>> hi,
>> Alias:
>> mail02-int_Disk1
>> Description:
>> ID:
>> 65ec515e-0aae-4fe6-a561-387929c7fb4d
>> Alignment:
>> Unknown
>> Disk Profile:
>> Wipe After Delete:
>> No
>>
>> that one
>>
> thanks in advance
>
> 2018-05-11 12:50 GMT-03:00 Benny Zlotnik :
>
>> I see here a failed attempt:
>> 2018-05-09 16:00:20,129-03 ERROR [org.ovirt.engine.core.dal.dbb
>> roker.auditloghandling.AuditLogDirector]
>> (EE-ManagedThreadFactory-engineScheduled-T
I believe you've hit this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1565040
You can try to release the lease manually using the sanlock client command
(there's an example in the comments on the bug);
once the lease is free the job will fail and the disk can be unlocked
On Thu, May 17, 2018
ata-center/mnt/10.35.0.233:
_root_storage__domains_sd1/5c4d2216-2eb3-4e24-b254-d5f83fde4dbe/images/b8eb8c82-fddd-4fbc-b80d-6ee04c1255bc/7190913d-320c-4fc9-a5b3-c55b26aa30f4.lease
Then you can look in /var/log/sanlock.log
2018-05-17 11:35:18 243132 [14847]: s2:r9 resource
5c4d2216-2eb3-4
based storage FWIW (both source and
> destination of the movement).
>
> Thanks.
>
> El 2018-05-17 10:01, Benny Zlotnik escribió:
> > In the vdsm log you will find the volumeInfo log which looks like
> > this:
> >
> > 2018-05-17 11:55:03,257+0300 DEBUG (jsonrpc/6) [
By the way, please verify it's the same issue; you should see "the volume
lease is not FREE - the job is running" in the engine log.
On Thu, May 17, 2018 at 1:21 PM, Benny Zlotnik wrote:
> I see because I am on debug level, you need to enable it in order to see
>
> https
27;, 'generation': 1, 'image':
> 'b4013aba-a936-4a54-bb14-670d3a8b7c38',
> 'ctime': '1526470759', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0',
> 'apparentsize': '1073
021210 0001 0005 6
And then I release it:
$ sanlock client release -r
3e541b2d-2a49-4eb8-ae4b-aa9acee228c6:221c45e1-7f65-42c8-afc3-0ccc1d6fc148:/dev/3e541b2d-2a49-4eb8-ae4b-aa9acee228c6/leases:109051904
-p 32265
release pid 32265
release done 0
$ sanlock direct dump
/dev/3e541b2d-2a49-
Could be this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1555116
Adding Ala
On Thu, May 17, 2018 at 5:00 PM, Marcelo Leandro
wrote:
> Error in engine.log.
>
>
> 2018-05-17 10:58:56,766-03 INFO
> [org.ovirt.engine.core.bll.snapshots.RemoveSnapshotCommand]
> (default task-31) [c4fc9663-51
Do you see this disk on engine side? it should be aware of this disk since
> it created
> the disk during live storage migration.
>
> Also, we should not have leftovers volumes after failed operations. Please
> file a bug
> for this and attach both engine.log and vdsm.log on the host doing the
> li
Which version are you using?
On Sun, 3 Jun 2018, 12:57 Arsène Gschwind,
wrote:
> Hi,
>
> in the UI error log ui.log i do get a lot of those errors:
>
> 2018-06-03 10:57:17,486+02 ERROR
> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
> (default task-52) [] Permutation name:
Are you able to move the disk?
Can you open a bug?
On Sun, Jun 3, 2018 at 1:35 PM, Arsène Gschwind
wrote:
> I'm using version : 4.2.3.8-1.el7 the latest version.
>
>
> On Sun, 2018-06-03 at 12:59 +0300, Benny Zlotnik wrote:
>
> Which version are you using?
>
> On
Hi,
What do you mean by converting the LUN from thin to preallocated?
oVirt creates LVs on top of the LUNs you provide
On Wed, Jun 13, 2018 at 2:05 PM, Albl, Oliver
wrote:
> Hi all,
>
>
>
> I have to move some FC storage domains from thin to preallocated. I
> would set the storage domain to m
is transparent to the oVirt host).
>
>
>
> Besides removing “discard after delete” from the storage domain flags, is
> there anything else I need to take care of on the oVirt side?
>
>
>
> All the best,
>
> Oliver
>
>
>
> *Von:* Benny Zlotnik
> *G
Can you provide full engine and vdsm logs?
On Mon, Jun 18, 2018 at 11:20 AM, wrote:
> Hi,
>
> We're running oVirt 4.1.9 (we cannot upgrade at this time) and we're
> having a major problem in our infrastructure. On friday, a snapshots were
> automatically created on more than 200 VMs and as this
18101223/583d3d
>
>
> El 2018-06-18 09:28, Benny Zlotnik escribió:
>
>> Can you provide full engine and vdsm logs?
>>
>> On Mon, Jun 18, 2018 at 11:20 AM, wrote:
>>
>> Hi,
>>>
>>> We're running oVirt 4.1.9 (we cannot upgrade at this tim
8131825/5550ee
>
> El 2018-06-18 13:19, Benny Zlotnik escribió:
>
>> Can you send the SPM logs as well?
>>
>> On Mon, Jun 18, 2018 at 1:13 PM, wrote:
>>
>> Hi Benny,
>>>
>>> Please find the logs at [1].
>>>
>>> Thank you.
can provide VPN access to our
> infrastructure so you can access and see whateve you need (all hosts, DB,
> etc...).
>
> Right now the machines that keep running work, but once shut down they
> start showing the problem below...
>
> Thank you
>
>
> El 2018-06-18 15:20, Be
ines that keep running work, but once shut down they
> start showing the problem below...
>
> Thank you
>
>
> El 2018-06-18 15:20, Benny Zlotnik escribió:
>
>> I'm having trouble following the errors, I think the SPM changed or
>> the vdsm log from the
You could do something like this (IIUC):
dead_snap1_params = types.Snapshot(
    description=SNAPSHOT_DESC_1,
    persist_memorystate=False,
    disk_attachments=[
        types.DiskAttachment(
            disk=types.Disk(
                id=disk.id
            )
, 2018 at 4:06 PM Gianluca Cecchi
wrote:
> On Thu, Jun 21, 2018 at 2:00 PM, Benny Zlotnik
> wrote:
>
>> You could something like this (IIUC):
>> dead_snap1_params = types.Snapshot(
>> description=SNAPSHOT_DESC_1,
>> persist_memorystate=F
Perhaps you can query the status of the job using the correlation id (taking
the examples from ovirt-system-tests):
dead_snap1_params = types.Snapshot(
    description=SNAPSHOT_DESC_1,
    persist_memorystate=False,
    disk_attachments=[
        types.DiskAttachment(
I'm not sure if it's reliable
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1460701
[2] -
https://github.com/oVirt/ovirt-system-tests/blob/master/basic-suite-master/test_utils/__init__.py#L249
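A rough sketch of how such polling can look with the Python SDK (an illustration,
not the exact helper from the link; whether the jobs collection supports searching
by correlation_id may depend on the engine version):

import time
import ovirtsdk4.types as types

jobs_service = connection.system_service().jobs_service()

def jobs_finished(correlation_id):
    # fetch the engine jobs started with our correlation id
    jobs = jobs_service.list(search='correlation_id=%s' % correlation_id)
    return jobs and all(job.status != types.JobStatus.STARTED for job in jobs)

while not jobs_finished('my-correlation-id'):
    time.sleep(5)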
On Wed, Jul 18, 2018 at 1:39 PM wrote:
> Hi Benny,
>
> El 2018-07-12 08:50, Benny
I can't write an elaborate response since I am away from my laptop, but a
workaround would be to simply insert the snapshot back into the snapshots
table.
You need to locate the snapshot's id in the logs where the failure occurred
and use the VM's id:
insert into snapshots values ('', '', 'ACTIVE',
'OK', '
Can you attach the logs from the original failure that caused the active
snapshot to disappear?
And also add your INSERT command
On Fri, Jul 20, 2018 at 12:08 AM wrote:
> Benny,
>
> Thanks for the response!
>
> I don't think I found the right snapshot ID in the logs, but I was able to
> track do
Can you locate commands with id: 8639a3dc-0064-44b8-84b7-5f733c3fd9b3,
94607c69-77ce-4005-8ed9-a8b7bd40c496 in the command_entities table?
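For example, on the engine database (same psql form as used elsewhere in this
thread; command_id is the column name I'd expect, verify against your schema):

$ psql -U engine -d engine -c "SELECT * FROM command_entities WHERE command_id IN ('8639a3dc-0064-44b8-84b7-5f733c3fd9b3', '94607c69-77ce-4005-8ed9-a8b7bd40c496');"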
On Mon, Jul 23, 2018 at 4:37 PM Marcelo Leandro
wrote:
> Good morning,
>
> can anyone help me ?
>
> Marcelo Leandro
>
> 2018-06-27 10:53 GMT-03:00 Marcelo Le
Can you attach the vdsm log?
On Wed, Aug 15, 2018 at 5:16 PM Inquirer Guy wrote:
> Adding to the below issue, my NODE01 can see the NFS share i created from
> the ENGINE01 which I don't know how it got through because when I add a
> storage domain from the ovirt engine I still get the error
>
>
It can be done by deleting from the images table:
$ psql -U engine -d engine -c "DELETE FROM images WHERE image_guid =
'6197b30d-0732-4cc7-aef0-12f9f6e9565b'";
of course the database should be backed up before doing this
On Fri, Jul 17, 2020 at 6:45 PM Nir Soffer wrote:
>
> On Thu, Jul 16, 202
from images where
image_group_id = ";
As well as
$ psql -U engine -d engine -c "SELECT s.* FROM snapshots s, images i
where i.vm_snapshot_id = s.snapshot_id and i.image_guid =
'6197b30d-0732-4cc7-aef0-12f9f6e9565b';"
On Sun, Jul 19, 2020 at 12:49 PM Benny Zlotnik wrote:
gent-common-1.0.14-1.el7 | | 2020-04-23
> 14:59:20.154023+02 | 2020-07-03 17:33:17.483215+02 |
> | | f
>
> (1 row)
>
>
> Thanks,
> Arsene
>
> On Sun, 2020-07-19 at 16:34 +0300, Benny Zlotnik wrote:
>
It was fixed[1]; you need to upgrade to libvirt 6+ and qemu 4.2+.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1785939
On Thu, Jul 23, 2020 at 9:59 AM Henri Aanstoot wrote:
>
>
>
>
> Hi all,
>
> I've got 2 two node setup, image based installs.
> When doing ova exports or generic snapshots,
n | Active VM
>
> creation_date | 2020-04-23 14:59:20.171+02
>
> app_list|
> kernel-3.10.0-957.12.2.el7,xorg-x11-drv-qxl-0.1.5-4.el7.1,kernel-3.10.0-957.12.1.el7,kernel-3.10.0-957.38.1.el7,ovirt-guest-agent-common-1.0.14-1.el7
>
> vm_conf
I think it would be easier to get an answer for this on a ceph mailing
list, but why do you need specifically 12.2.7?
On Wed, Aug 19, 2020 at 4:08 PM wrote:
>
> Hi!
> I have a problem with install ceph-common package(needed for cinderlib
> Managed Block Storage) in oVirt Node 4.4.1 - oVirt doc
The feature is currently in tech preview, but it's being worked on.
The feature page is outdated, but I believe this is what most users
in the mailing list were using. We held off on updating it because the
installation instructions have been a moving target, but it is more
stable now and I will u
;UTF8' lc_collate 'en_US.UTF-8'
> lc_ctype 'en_US.UTF-8';\""
>
> ...and added the following to the end of /var/lib/pgsql/data/pg_hba.conf:
>
> hostcinder engine ::0/0 md5
> hostcinder engi
/dnf.conf, but it doesn't
> seem to be obeyed when running engine-setup. Is there another way that
> I can get engine-setup to use a proxy?
>
> --Mike
>
>
> On 9/30/20 2:19 AM, Benny Zlotnik wrote:
> > When you ran `engine-setup` did you enable cinderlib preview
a
> volume-5419640e-445f-4b3f-a29d-b316ad031b7a
> [root@ovirt4]# rbd --id ovirt info
> rbd.ovirt.data/volume-5419640e-445f-4b3f-a29d-b316ad031b7a
> rbd image 'volume-5419640e-445f-4b3f-a29d-b316ad031b7a':
> size 100 GiB in 25600 objects
> order 22 (4 MiB objects)
>
orage
domain or edit the database table
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1881832#c8
On Wed, Oct 14, 2020 at 3:40 PM Michael Thomas wrote:
>
> On 10/14/20 3:30 AM, Benny Zlotnik wrote:
> > Jeff is right, it's a limitation of kernel rbd, the recommendation is
Sorry, I accidentally hit send prematurely. The database table is
driver_options; the options are JSON under driver_options.
On Wed, Oct 14, 2020 at 5:32 PM Benny Zlotnik wrote:
>
> Did you attempt to start a VM with this disk and it failed, or you
> didn't try at all? If it's
Do you know why it was stuck?
You can use unlock_entity.sh[1] to unlock the disk
[1]
https://www.ovirt.org/develop/developer-guide/db-issues/helperutilities.html
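A sketch of how it is usually invoked (the path and options are from memory and
the disk id is a placeholder, so double-check with --help first):

# list locked disks
$ /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -q
# unlock a specific disk
$ /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk <disk-id>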
On Tue, Nov 3, 2020 at 1:38 PM wrote:
> I have a vm that has two disks one active and another disabling when
> trying to migrate th
You mean the disk physically resides on one storage domain, but the
engine sees it on another?
Which version did this happen on?
Do you have the logs from this failure?
On Tue, Nov 3, 2020 at 5:51 PM wrote:
>
>
> I used it but it didn't work The disk is still in locked status
>
> when I run
Which version are you using?
Did this happen more than once for the same disk?
A similar bug was fixed in 4.3.10.1[1]
There is another bug with a similar symptom which occurs very rarely and we
were unable to reproduce it
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1758048
On Mon, Nov 9, 2020