in libvirt: src/cpu/cpu_map.xml
it is a list of CPU definitions, with the standard CPUs but also custom ones
...
--> this one is the default used by RHEV; they have enabled the lahf_lm flag
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux
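For reference, a model entry in that file looks roughly like this (hand-written
sketch with a hypothetical model name, not copied from libvirt; the real feature
lists are much longer):

    <arch name='x86'>
      <model name='example-model'>   <!-- hypothetical name -->
        <feature name='lahf_lm'/>    <!-- the flag mentioned above -->
        <feature name='sse2'/>
      </model>
    </arch>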
>>How does libvirt handle that?
AFAIK, libvirt doesn't choose the best cpu type for a specific OS for you.
But maybe oVirt, RHEV or OpenStack do it in their code.
I'll try to have a look at them.
- Original Message -
From: "Dietmar Maurer"
To: "Alexandre Derumier", pve-devel@pve.proxmox.com
How does libvirt handle that?
> see
> http://forum.proxmox.com/threads/16206-Windows-Server-2012-R2-and-0x005D-Error
> We've found another bug in proxmox. When using the proxmox web interface to
> clone a template or existing box, the pool specified is completely ignored.
> The
> web api works correctly.
Thanks for the bug report.
Would you mind subscribing to the list? Otherwise I need to manually confirm any of your mails.
We've found another bug in proxmox. When using the proxmox web interface
to clone a template or existing box, the pool specified is completely
ignored. The web api works correctly.
Please find attached a patch to correct this problem.
Jort
--
Jort Bloem
Technical Engineer - Auckland
Busines
see
http://forum.proxmox.com/threads/16206-Windows-Server-2012-R2-and-0x005D-Error
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 50b774d..7392a49 1
This version uses qemu64 if kvm64 is defined.
The other CPU models already have the flag needed for Windows 8.1 defined.
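Conceptually the change amounts to something like this (a simplified sketch
only, not the actual diff; variable names are illustrative):

    # Sketch only: substitute qemu64 for kvm64, because kvm64 lacks the
    # lahf_lm flag that Windows 8.1 / Server 2012 R2 needs (the 0x005D
    # boot error).
    my $conf = { cpu => 'kvm64' };     # stand-in for the guest configuration
    my $cmd  = [];                     # stand-in for the kvm command line

    my $cpu = $conf->{cpu} || 'kvm64';
    $cpu = 'qemu64' if $cpu eq 'kvm64';
    push @$cmd, '-cpu', $cpu;          # yields "-cpu qemu64"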
No, it is hard coded and quite small.
But that mbuffer looks promising - maybe we can use much larger
buffers (same size as LVM snapshot size), maybe mmap'ed?
To clarify, are you suggesting to make the existing hard-coded
buffer larger/configurable?
If so, I like this idea. It seems like t
> > That is how it works already.
> Is the size of the buffer configurable?
> I would like to use 4-8G of RAM
No, it is hard coded and quite small.
But that mbuffer looks promising - maybe we can use much larger
buffers (same size as LVM snapshot size), maybe mmap'ed?
Would be great if you can r
That is how it works already.
Is the size of the buffer configurable?
I would like to use 4-8G of RAM
Anyway, I will try to upgrade KVM to 1.7 first (many backup-related changes).
We can then test again and try to optimize further.
Sounds like a plan
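To make the mbuffer idea above concrete, a backup pipeline could be wrapped
roughly like this (purely illustrative; the devices, sizes and paths are
examples, not existing vzdump code):

    # Illustrative only: pipe an LVM snapshot through mbuffer with a large,
    # configurable memory buffer (mbuffer's -m option).
    my $bufsize  = '4G';    # e.g. the same size as the LVM snapshot
    my $pipeline = "dd if=/dev/vg0/vzsnap bs=1M"
                 . " | mbuffer -q -m $bufsize -o /backup/vm-100.raw";
    system('/bin/sh', '-c', $pipeline) == 0
        or die "backup pipeline failed: $?\n";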
> I have a suggestion that would help alleviate the read and write downsides to
> this.
>
> Create a memory buffer where the reads/writes from the VM are placed.
> When the buffer is over a certain percentage, stop the backup read operations and
> flush the buffer.
> The VM can perform IO up to the li
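A rough sketch of that scheme (hypothetical names and thresholds only, nothing
here is existing PVE code):

    use strict;
    use warnings;

    my $buf_limit     = 4 * 1024**3;   # e.g. 4G of RAM for the buffer
    my $flush_percent = 80;            # flush once the buffer is 80% full
    my @write_buffer;
    my $buffered = 0;

    sub on_guest_write {
        my ($data) = @_;
        push @write_buffer, $data;             # stash the guest write
        $buffered += length $data;
        if ($buffered * 100 / $buf_limit > $flush_percent) {
            pause_backup_reads();              # stop the backup read operations
            flush_buffer();                    # drain the buffered guest I/O
            resume_backup_reads();
        }
    }

    # Placeholders standing in for the real backup machinery:
    sub pause_backup_reads  { }
    sub flush_buffer        { @write_buffer = (); $buffered = 0; }
    sub resume_backup_reads { }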
There is also a small possibility that we have a bug ;-) I will debug that
when I update that code for 1.7.
Looking at the code, it seems that we also back up read blocks immediately. That
way we can avoid re-reads.
I am not sure if that is good or bad.
This would explain the degraded read performance.
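For what it's worth, the behaviour described above, sketched with hypothetical
names (the real logic lives in the qemu backup job):

    # Conceptual sketch only: when the guest reads a cluster that has not been
    # backed up yet, copy it to the backup target right away, so it never has
    # to be re-read later.
    my %backed_up;

    sub on_guest_read {
        my ($cluster) = @_;
        if (!$backed_up{$cluster}) {
            backup_cluster($cluster);  # hypothetical helper: copy to target
            $backed_up{$cluster} = 1;
        }
    }

    sub backup_cluster { }             # placeholder for the real copy logic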
>>I plan to start working on that now - do you already have some patches?
No, sorry, I was too busy trying to finish the ZFS Nexenta patch and also local
storage migration.
(I will also try to test your Ceph pve-manager patch this week.)
- Original Message -
From: "Dietmar Maurer"
To: "Alexandre DERUMIER"
>>The tests from Eric only do reads (there is not a single write involved).
Oh, I missed that.
I think it should be a qemu problem, as the only difference is that
with lvm snapshot backup, backup reads are done directly from the disk,
while with qemu backup, reads are done through qemu.
Maybe qemu has more overhead.