With kvm-88, a configure line such as the following
# ./configure --prefix=/path/to/install/dir --target-list=x86_64-softmmu --kerneldir=/lib/modules/2.6.30/build
yields the build failure shown below when compiling the kvm kernel sources (I think
that in earlier versions there was a configure directive to disa
> include/linux/mmzone.h:18:26: error: linux/bounds.h: No such file or directory
> include/linux/mmzone.h:256:5: warning: "MAX_NR_ZONES" is not defined
Okay, my problem was that for some reason the auto-generated linux/bounds.h
file had been cleaned from this system; now I can build it.
Or.
Michael S. Tsirkin wrote:
> Well, I definitely see some gain in latency.
> Here's a simple test over a 1G ethernet link (host to guest):
> Native:
> 126976 126976 1 1 10.00 10393.23
> vhost virtio:
> 126976 126976 1 1 10.00 8169.58
> Userspace virtio:
> 126976 126
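(The columns above look like netperf TCP_RR output: socket sizes, request/response
sizes in bytes, elapsed seconds, and transactions per second. An invocation roughly
like the line below would produce such output; the exact command isn't shown in the
thread, so the guest IP and options are placeholders.)
# netperf -H <guest-ip> -t TCP_RR -l 10 -- -r 1,1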
Michael S. Tsirkin wrote:
> The patches are against 2.6.31-rc4. I'd like them to go into linux-next
> and down the road 2.6.32 if possible. Please comment.
Hi Michael,
Just wanted to check with you how this can be tested: is 2.6.31-rc4
plus these two patches enough to form the kernel part?
Michael S. Tsirkin wrote:
No, these patches are on top of Avi's kvm.git
So are they on top of some branch in Avi's kvm.git which is planned to
be merged for 2.6.32? Which branch should I use?
Or.
Michael S. Tsirkin wrote:
Yes. master
Okay, I will get to testing this later next week. Any chance you can provide
some packets-per-second numbers (netperf UDP stream with small packets)?
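(For reference, a small-packet UDP packets-per-second run with netperf could look
something like the line below; the guest IP and the 64-byte message size are
placeholders, not figures from this thread.)
# netperf -H <guest-ip> -t UDP_STREAM -l 10 -- -m 64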
Or.
Sridhar Samudrala wrote:
On Wed, 2010-01-27 at 22:39 +0100, Arnd Bergmann wrote:
we already have -net socket,fd and any user that passes an fd into
that already knows what he wants to do with it. Making it work with
raw sockets is just a natural extension to this.
Didn't realize that -net socke
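(For context, the existing -net socket backend referred to above can either connect
over TCP or reuse an already-open file descriptor; a rough sketch, with the address,
fd number and NIC model as placeholders:)
# qemu-system-x86_64 ... -net nic,model=virtio -net socket,connect=127.0.0.1:1234
# qemu-system-x86_64 ... -net nic,model=virtio -net socket,fd=3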
Anthony Liguori wrote:
Considering VEPA enabled hardware doesn't exist today and the
standards aren't even finished being defined, I don't think it's a
really strong use case ;-)
Anthony,
VEPA-enabled NIC hardware is live and kicking, maybe even in your onboard
1Gb/s NIC: the Intel 82576 (<--
on an external library
- we have access to the underlying file descriptor, which makes it
  possible to connect to vhost net
- don't support polling all interfaces, always bind to a specific one
Signed-off-by: Or Gerlitz
Signed-off-by: Michael S. Tsirkin
---
 hw/virtio-net.c | 3 +-
net.c
On 7/8/2015 6:18 PM, Paolo Bonzini wrote:
This part of the MTRR patches was dropped by Xiao. Bring SVM to feature
parity with VMX, and then do guest MTRR virtualization for both VMX and SVM.
The IPAT bit of VMX extended page tables is emulated by mangling the guest
PAT value.
I do not have any
On Wed, Oct 21, 2015 at 7:37 PM, Lan Tianyu wrote:
> This patchset is to propose a new solution to add live migration support
> for 82599 SRIOV network card.
> In our solution, we prefer to put all device-specific operations into the VF
> and PF drivers and make the code in QEMU more general.
[...]
>
On Wed, Oct 21, 2015 at 10:20 PM, Alex Williamson wrote:
> This is why the typical VF-agnostic approach here is to use bonding
> and fail over to an emulated device during migration, so performance
> suffers, but downtime is acceptable.
Bonding in the VM isn't a zero-touch solution, r
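(The approach Alex describes is, roughly, an active-backup bond inside the guest with
the VF as primary and an emulated virtio-net device as backup; a hypothetical
guest-side sketch, with interface names as placeholders:)
# modprobe bonding mode=active-backup miimon=100 primary=eth0
# ip link set bond0 up
# ifenslave bond0 eth0 eth1
Here eth0 would be the VF and eth1 the emulated virtio-net device; before migration
the VF is hot-unplugged so traffic fails over to eth1.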