I have an nfs-ganesha mount providing install images for VMs, and
recently the mount stalls after starting a VM (even when it does not
access an ISO image on the NFS mount).
I have the host IP on a macvtap of the connected interface. The VM also
has a macvtap on the same interface
I have a few hosts running just libvirt, between which I can do live
migration. Is there some way, or does someone have a script, to balance
guests evenly across these hosts, e.g. based on memory usage?
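I am not aware of a stock tool for this with plain libvirt. A minimal sketch of the decision logic, assuming passwordless ssh to each host, made-up host names host1..host3, and summed guest RSS from `virsh dommemstat` as the load metric:

```shell
#!/bin/sh
# Sketch only, not a finished balancer.
# pick_target: read lines of "hostname used_kb" on stdin,
# print the name of the least-loaded host.
pick_target() {
    sort -k2 -n | awk 'NR==1 {print $1}'
}

# Hypothetical usage (host names and ssh access are assumptions):
# for h in host1 host2 host3; do
#   printf '%s %s\n' "$h" \
#     "$(ssh "$h" "virsh list --name | xargs -r -n1 virsh dommemstat \
#        | awk '/^rss/ {s+=\$2} END {print s+0}'")"
# done | pick_target
# Then migrate one guest off the busiest host:
# virsh migrate --live --persistent GUEST "qemu+ssh://TARGET/system"
```

The pure `pick_target` step is the only part shown runnable; the ssh/virsh loop is commented because it depends on your environment.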
So, look at the general server logs. Try starting the qemu command from
the command line; enable verbose logging, increase log levels, etc.
"cannot create PID file: Failed to write pid file" - disk full?
-Original Message-
From: muke101 [mailto:muke...@protonmail.com]
Sent: Tuesday
I have had ksm[2] causing a high load (on recent CentOS 7); such things
have been reported before[1]. I suspect that this could be related to
Windows VMs. I do not have that many VMs in the current setup, and this
load increased on the hosts with a few Windows VMs that are already
generating
Oh, too bad. I am currently working on a hybrid solution that has Mesos
as an orchestrator combined with some KVM/QEMU VMs. DC/OS does not allow
this, and I wondered whether your Kubernetes was perhaps limiting your
options there as well.
-Original Message-
Cc: libvirt-users
Subject: RE: EXT: R
What about running tasks/containers directly on the host?
-Original Message-
To: Daniel P. Berrangé
Cc: libvirt-users@redhat.com
Subject: RE: EXT: Re: KVM/QEMU Memory Ballooning
Hi Daniel,
Thank you very much for the quick answer. Now it is clear how this
memballooning driver works.
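For the archives: the device behind this is declared in the domain XML. A typical stanza looks like the one below (the 10-second stats period is just an example value):

```xml
<memballoon model='virtio'>
  <!-- ask the guest balloon driver to report memory stats every 10s -->
  <stats period='10'/>
</memballoon>
```

With a stats period set, `virsh dommemstat <domain>` shows the guest-reported values rather than only the host-side ones.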
I think it used to work too; maybe in one of the latest releases of 7 it
was removed in favour of NetworkManager (which I do not have).
-Original Message-
To: Marc Roos; libvirt-users
Subject: Re: virsh attach-interface auto up
Huh! It seemed to work fine for my CentOS 7
https://access.redhat.com/solutions/429653
-Original Message-
To: libvirt-users@redhat.com
Subject: Re: virsh attach-interface auto up
On 8/8/20 9:42 AM, Marc Roos wrote:
>
> I am doing a virsh detach-interface and an attach-interface. Is it
> possible to automatically bring
I am doing a virsh detach-interface and an attach-interface. Is it
possible to automatically bring the interface up after attaching it?
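As far as I know attach-interface has no auto-up flag. Two things to try (domain, tap, and interface names below are placeholders): `virsh domif-setlink` controls the host-side link state, while bringing the device up *inside* the guest needs the guest agent:

```shell
# host side: make sure the virtual link is up (vnet0 is an example name)
virsh domif-setlink mydomain vnet0 up

# guest side, via qemu-guest-agent (eth1 is an assumed guest name):
virsh qemu-agent-command mydomain \
  '{"execute":"guest-exec","arguments":{"path":"/sbin/ip","arg":["link","set","eth1","up"]}}'
```

A udev rule inside the guest that ifups hotplugged NICs would achieve the same without agent round-trips.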
> you must be an ASCII charting demigod. Did you use software to make
> those, or do them yourselves? Either way, I'm impressed...
Search for AsciiArtStudio.exe
Sorry Jerry, I guess you will have to investigate this yourself. I do
this maybe only once every two years, and I also spent quite some time
then getting it to work. It is probably very similar to an earlier or
later release.
Make sure you take snapshots before updating macOS. I am always fearing
that something
I think that should work. I keep VMs as old as Mountain Lion, plus
Sierra and High Sierra up to Catalina.
-Original Message-
Cc: libvirt-users
Subject: Re: image of OS X how to boot
Thanks Marc - my OS X is still Yosemite. I don't see that in these
files? Is support for that available?
Jerry
OS X / macOS works only with such hacks:
https://github.com/kholia/OSX-KVM
-Original Message-
To: libvirt-users@redhat.com
Subject: image of OS X how to boot
I have an image of OS X that I made from the physical disk using dd.
When I boot it with the libvirt/virt-manager environment I get an error
22 June 2020 1:54
To: libvirt-users@redhat.com
Subject: Re: Feature request? Auto start vm upon next shutdown
On 6/21/20 13:20, Marc Roos wrote:
>
> Sometimes when you change the configuration, this configuration change
> will only be available after a shutdown. Not to have to monitor
Sometimes when you change the configuration, the change only becomes
effective after a shutdown. To avoid having to monitor when the VM shuts
down in order to start it again, it would be nice to have libvirt start
it one more time. Something like:
1. change the network interface of a running guest
I was trying to see if it is possible to automatically set the MTU size
for the guest. I am using a tunnel with a smaller MTU, so when I switch
VMs over, it would be nice if they got this tunnel MTU automatically.
This config:
Generates this error:
error:
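For reference, newer libvirt (3.1.0 and later, if I remember correctly, and only for some interface types such as network/bridge) accepts an explicit MTU on the interface. A sketch assuming a 1400-byte tunnel MTU:

```xml
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <!-- propagate the tunnel MTU to the guest-visible device -->
  <mtu size='1400'/>
</interface>
```

An older libvirt will reject this element, which may be exactly the error above.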
> ==========================================
> In Soviet Russia, Google searches you!
> ==========================================
You can easily remove 'In Soviet Russia, '
Does anyone know how the agent uses the VSS service? Where is this
process documented?
-Original Message-
Cc: libvirt-users
Subject: snapshotting disk images with exchange db / volumes
I have been asking at technet about if I could snapshot an exchange db
volume[1]. But it looks li
I have been asking on TechNet whether I could snapshot an Exchange DB
volume[1]. But it looks like Windows Server Backup sort of does the
same: it creates a VSS image and backs that one up. However, they
truncate the Exchange DB log files after a successful procedure.
I was wondering if s
Link?
-Original Message-
Sent: 21 February 2020 11:50
To: Marc Roos
Cc: pkrempa; libvirt-users
Subject: Re: guest-fsfreeze-freeze freezes all mounted block devices
On Mon, Feb 17, 2020 at 01:52:02PM +0100, Marc Roos wrote:
>
> Hmmm, using 'virsh domfsinfo testdom' giv
Thanks! Will compare it this weekend!
-Original Message-
Cc: libvirt-ML
Subject: Re: can hotplug vcpus to running Windows 10 guest, but not
unplug
- On Feb 15, 2020, at 12:47 AM, Marc Roos m.r...@f1-outsourcing.eu
wrote:
> Would you mind sharing your xml? I have strange h
2596e-517c-11ea-b213-525400e83365
Report Status: 0
-Original Message-
Cc: libvirt-users
Subject: Re: guest-fsfreeze-freeze freezes all mounted block devices
On Mon, Feb 17, 2020 at 10:03:27 +0100, Marc Roos wrote:
> Hi Peter,
>
> Should I assume that the virsh domfsfreeze, d
Subject: Re: guest-fsfreeze-freeze freezes all mounted block devices
On Fri, Feb 14, 2020 at 22:14:55 +0100, Marc Roos wrote:
>
> I wondered if anyone here can confirm that
>
> virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze"}'
Note that libvirt implem
Would you mind sharing your XML? I have a strangely high host load on an
idle Windows guest/domain.
-Original Message-
Sent: 14 February 2020 16:05
To: libvirt-ML
Subject: can hotplug vcpus to running Windows 10 guest, but not unplug
Hi,
I'm playing around a bit with vcpus.
My guest is Win
I wondered if anyone here can confirm that
virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze"}'
freezes the filesystems on all mounted block devices. So if I use 4
block devices, are they all frozen for snapshotting, or just the root
fs?
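If you want control per filesystem rather than freezing everything, virsh exposes the agent call with a mountpoint argument (mountpoints below are examples; list them first with domfsinfo):

```shell
virsh domfsinfo testdom                  # list the guest's filesystems
virsh domfsfreeze testdom --mountpoint /data1 --mountpoint /data2
# ... take the snapshot here ...
virsh domfsthaw testdom
```

Without `--mountpoint`, domfsfreeze (like the raw guest-fsfreeze-freeze command) freezes all guest filesystems.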
:0 I indeed forgot to check the manpage, sorry
-Original Message-
Cc: libvirt-users
Subject: Re: [libvirt-users] Live migrate with virsh migrate
On Tue, Dec 03, 2019 at 10:50:42 +0100, Marc Roos wrote:
>
>
> When I migrate a guest from svr1 to svr2 and shutdown the guest
When I migrate a guest from svr1 to svr2 and then shut the guest down on
svr2, I cannot start it anymore because the XML is not there. How can I
make sure this XML is automatically stored so I can start the guest
again on svr2?
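By default `virsh migrate` leaves the domain transient on the destination, so its definition vanishes at shutdown. The flags below persist the definition on the target and remove it from the source (guest name and URI are placeholders):

```shell
virsh migrate --live --persistent --undefinesource \
    guestname qemu+ssh://svr2/system
```

`--persistent` is what writes the XML on svr2; `--undefinesource` is optional if you want to keep the definition on svr1 as well.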
interface name
On 10/17/19 4:26 PM, Marc Roos wrote:
>
> Is it possible to do a live migrate of a guest, having on the from
> host a source_device=eth2 and to host a source_dev=eth1?
What management tool are you using that the syntax is
"source_device=eth2"?
Are you maybe just paraphrasing
Is it possible to do a live migrate of a guest, having
source_device=eth2 on the source host and source_dev=eth1 on the
destination host?
___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users
I have a host set up for libvirt KVM/QEMU VMs, and I wonder a bit about
the overhead of macvtap and how to configure the MTUs properly. To be
able to communicate with the host, I have moved the host's IP address
from the adapter to a macvtap.
I have the b
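For the archives, moving the host IP onto a macvtap can be sketched with iproute2 roughly like this (eth0, the address, and the MTU value are placeholders; a macvtap inherits the lower device's MTU unless set explicitly):

```shell
ip link add link eth0 name macvtap0 type macvtap mode bridge
ip link set macvtap0 mtu 1500 up        # override the inherited MTU here
ip addr flush dev eth0                  # careful: drops host connectivity
ip addr add 192.0.2.10/24 dev macvtap0
```

Do this from a console, not over ssh on eth0, since the flush step cuts the connection.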