When populating pages across a mem boundary at bootup, the populated page
count isn't correct. This is because memory gets populated into a non-mem
region and is ignored.
The pfn range is also wrongly aligned when the mem boundary isn't page aligned.
-v2: If xen_do_chunk() fails to populate, abort this chunk and any others.
Su
Sorry, please ignore it. Tabs are still being translated to spaces.
On 2012-07-18 11:08, zhenzhong.duan wrote:
From c40ea05842fec8f6caa053b2d58f54608ed0835f Mon Sep 17 00:00:00 2001
From: Zhenzhong Duan
Date: Wed, 4 Jul 2012 14:08:10 +0800
Subject: [PATCH] xen: populate right count of pages when across mem boundary
When populating pages across a mem boundary at bootup, the populated page
count isn't correct. This is because memory gets populated into a non-mem
region and is ignored.
The pfn range is also wrongly aligned when the mem boundary isn't page aligned.
-v2: If xen_do_chunk() fails to populate, abort this chunk and any others.
S
On 07/17/2012 11:09 PM, Michael S. Tsirkin wrote:
On Fri, Jul 13, 2012 at 04:55:06PM +0800, Asias He wrote:
Hi folks,
[I am resending to fix the broken thread in the previous one.]
This patchset adds vhost-blk support. vhost-blk is an in-kernel virtio-blk
device accelerator. Compared to usersp
On 07/18/2012 03:10 AM, Jeff Moyer wrote:
Asias He writes:
vhost-blk is an in-kernel virtio-blk device accelerator.
This patch is based on Liu Yuan's implementation with various
improvements and bug fixes. Notably, this patch makes guest notify and
host completion processing in parallel which
From: Nicholas Bellinger
This patch adds the initial code for tcm_vhost, a Vhost level TCM
fabric driver for virtio SCSI initiators into KVM guest.
This code is currently up and running on v3.5-rc2 host+guest along
with the virtio-scsi vdev->scan() patch to allow a proper
scsi_scan_host() to occ
From: Nicholas Bellinger
This patch adds the initial vhost_scsi_ioctl() callers for
VHOST_SCSI_SET_ENDPOINT
and VHOST_SCSI_CLEAR_ENDPOINT respectively, and also adds struct
vhost_vring_target
that is used by tcm_vhost code when locating target ports during qemu setup.
Signed-off-by: Stefan Haj
From: Stefan Hajnoczi
The vhost work queue allows processing to be done in vhost worker thread
context, which uses the owner process mm. Access to the vring and guest
memory is typically only possible from vhost worker context so it is
useful to allow work to be queued directly by users.
Curren
From: Stefan Hajnoczi
In order for other vhost devices to use the VHOST_FEATURES bits the
vhost-net specific bits need to be moved to their own VHOST_NET_FEATURES
constant.
(Asias: Update drivers/vhost/test.c to use VHOST_NET_FEATURES)
Signed-off-by: Stefan Hajnoczi
Cc: Zhi Yong Wu
Cc: Michae
From: Nicholas Bellinger
Hi folks,
The following is the RFC-v3 series of tcm_vhost target fabric driver code
currently in-flight for-3.6 mainline code.
With the merge window opening soon, the tcm_vhost code has started seeing
time in linux-next. The v2 -> v3 changelog from the last week is cur
On Wed, 2012-07-18 at 02:11 +0300, Michael S. Tsirkin wrote:
> On Tue, Jul 17, 2012 at 03:37:20PM -0700, Nicholas A. Bellinger wrote:
> > On Wed, 2012-07-18 at 01:18 +0300, Michael S. Tsirkin wrote:
> > > On Tue, Jul 17, 2012 at 03:02:08PM -0700, Nicholas A. Bellinger wrote:
> > > > On Wed, 2012-07
On Tue, Jul 17, 2012 at 03:37:20PM -0700, Nicholas A. Bellinger wrote:
> On Wed, 2012-07-18 at 01:18 +0300, Michael S. Tsirkin wrote:
> > On Tue, Jul 17, 2012 at 03:02:08PM -0700, Nicholas A. Bellinger wrote:
> > > On Wed, 2012-07-18 at 00:34 +0300, Michael S. Tsirkin wrote:
> > > > On Tue, Jul 17,
On Wed, 2012-07-18 at 01:18 +0300, Michael S. Tsirkin wrote:
> On Tue, Jul 17, 2012 at 03:02:08PM -0700, Nicholas A. Bellinger wrote:
> > On Wed, 2012-07-18 at 00:34 +0300, Michael S. Tsirkin wrote:
> > > On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
> > > > On Tue, 2012-07
On Tue, Jul 17, 2012 at 03:02:08PM -0700, Nicholas A. Bellinger wrote:
> On Wed, 2012-07-18 at 00:34 +0300, Michael S. Tsirkin wrote:
> > On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
> > > On Tue, 2012-07-17 at 18:05 +0300, Michael S. Tsirkin wrote:
> > > > On Wed, Jul 11,
On Wed, 2012-07-18 at 00:58 +0300, Michael S. Tsirkin wrote:
> On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
> > Wrt to staging, I'd like to avoid mucking with staging because:
> >
> > *) The code has been posted for review
> > *) The code has been converted to use the lat
On Wed, 2012-07-18 at 00:34 +0300, Michael S. Tsirkin wrote:
> On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
> > On Tue, 2012-07-17 at 18:05 +0300, Michael S. Tsirkin wrote:
> > > On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
> > > > From: Nicholas
On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
> Wrt to staging, I'd like to avoid mucking with staging because:
>
> *) The code has been posted for review
> *) The code has been converted to use the latest target-core primitives
> *) The code does not require cleanups betw
On Tue, 2012-07-17 at 13:55 -0500, Anthony Liguori wrote:
> On 07/17/2012 10:05 AM, Michael S. Tsirkin wrote:
> > On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
> >
> > It still seems not 100% clear whether this driver will have major
> > userspace using it. And if not, i
On Tue, Jul 17, 2012 at 02:17:22PM -0700, Nicholas A. Bellinger wrote:
> On Tue, 2012-07-17 at 18:05 +0300, Michael S. Tsirkin wrote:
> > On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
> > > From: Nicholas Bellinger
> > >
> > > Hi folks,
> > >
> > > The following is a RFC
On Tue, 2012-07-17 at 18:05 +0300, Michael S. Tsirkin wrote:
> On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
> > From: Nicholas Bellinger
> >
> > Hi folks,
> >
> > The following is a RFC-v2 series of tcm_vhost target fabric driver code
> > currently in-flight for-3.6 mai
On Tue, Jul 17, 2012 at 01:55:42PM -0500, Anthony Liguori wrote:
> On 07/17/2012 10:05 AM, Michael S. Tsirkin wrote:
> >On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
> >>From: Nicholas Bellinger
> >>
> >>Hi folks,
> >>
> >>The following is a RFC-v2 series of tcm_vhost targe
On 07/17/2012 10:05 AM, Michael S. Tsirkin wrote:
On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger
Hi folks,
The following is a RFC-v2 series of tcm_vhost target fabric driver code
currently in-flight for-3.6 mainline code.
After last week's deve
This patch is for testing/reporting purposes only and shall be dropped if
the rest of this patchset is accepted for merging.
Signed-off-by: Rafael Aquini
---
 drivers/virtio/virtio_balloon.c |    1 +
 include/linux/vm_event_item.h   |    2 ++
 mm/compaction.c                 |    1 +
m
Besides making balloon pages movable at allocation time and introducing
the necessary primitives to perform balloon page migration/compaction,
this patch also introduces the following locking scheme to provide the
proper synchronization and protection for struct virtio_balloon elements
against conc
Memory fragmentation introduced by ballooning might significantly reduce
the number of 2MB contiguous memory blocks that can be used within a guest,
thus imposing performance penalties associated with the reduced number of
transparent huge pages that could be used by the guest workload.
This patch
This patch introduces the helper functions as well as the necessary changes
to teach compaction and migration bits how to cope with pages which are
part of a guest memory balloon, in order to make them movable by memory
compaction procedures.
Signed-off-by: Rafael Aquini
---
include/linux/mm.h |
On Tue, Jul 17, 2012 at 09:19:26AM -0700, Joe Perches wrote:
> On Tue, 2012-07-17 at 09:04 -0700, Greg KH wrote:
> > On Sat, Jul 14, 2012 at 01:34:06PM -0700, K. Y. Srinivasan wrote:
> > > Format GUIDS as per MSFT standard. This makes interacting with MSFT
> > > tool stack easier.
> []
> > > diff -
On Tue, 2012-07-17 at 09:04 -0700, Greg KH wrote:
> On Sat, Jul 14, 2012 at 01:34:06PM -0700, K. Y. Srinivasan wrote:
> > Format GUIDS as per MSFT standard. This makes interacting with MSFT
> > tool stack easier.
[]
> > diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
[]
> > @@ -147,7 +
On Fri, Jul 13, 2012 at 04:55:06PM +0800, Asias He wrote:
>
> Hi folks,
>
> [I am resending to fix the broken thread in the previous one.]
>
> This patchset adds vhost-blk support. vhost-blk is an in-kernel virtio-blk
> device accelerator. Compared to userspace virtio-blk implementation, vhost-bl
On Wed, Jul 11, 2012 at 09:15:00PM +, Nicholas A. Bellinger wrote:
> From: Nicholas Bellinger
>
> Hi folks,
>
> The following is a RFC-v2 series of tcm_vhost target fabric driver code
> currently in-flight for-3.6 mainline code.
>
> After last week's developments along with the help of some
On Fri, Jul 13, 2012 at 04:31:21PM +0800, zhenzhong.duan wrote:
> When populating pages across a mem boundary at bootup, the populated page
> count isn't correct. This is because memory gets populated into a non-mem
> region and is ignored.
>
> The pfn range is also wrongly aligned when the mem boundary isn't page aligned.
On Tue, Jul 17, 2012 at 03:02:45PM +0200, Paolo Bonzini wrote:
> Il 17/07/2012 14:48, Michael S. Tsirkin ha scritto:
> > On Tue, Jul 17, 2012 at 01:03:39PM +0100, Stefan Hajnoczi wrote:
> >> On Tue, Jul 17, 2012 at 12:54 PM, Michael S. Tsirkin
> >> wrote:
> Knowing the answer to that is impo
Il 17/07/2012 14:48, Michael S. Tsirkin ha scritto:
> On Tue, Jul 17, 2012 at 01:03:39PM +0100, Stefan Hajnoczi wrote:
>> On Tue, Jul 17, 2012 at 12:54 PM, Michael S. Tsirkin wrote:
Knowing the answer to that is important before anyone can say whether
this approach is good or not.
>
On Tue, Jul 17, 2012 at 01:03:39PM +0100, Stefan Hajnoczi wrote:
> On Tue, Jul 17, 2012 at 12:54 PM, Michael S. Tsirkin wrote:
> >> Knowing the answer to that is important before anyone can say whether
> >> this approach is good or not.
> >>
> >> Stefan
> >
> > Why is it?
>
> Because there might
On Tue, Jul 17, 2012 at 12:54 PM, Michael S. Tsirkin wrote:
>> Knowing the answer to that is important before anyone can say whether
>> this approach is good or not.
>>
>> Stefan
>
> Why is it?
Because there might be a fix to kvmtool which closes the gap. It
would be embarrassing if vhost-blk was
On Tue, Jul 17, 2012 at 12:42:13PM +0100, Stefan Hajnoczi wrote:
> On Tue, Jul 17, 2012 at 12:26 PM, Michael S. Tsirkin wrote:
> > On Tue, Jul 17, 2012 at 12:11:15PM +0100, Stefan Hajnoczi wrote:
> >> On Tue, Jul 17, 2012 at 10:21 AM, Asias He wrote:
> >> > On 07/17/2012 04:52 PM, Paolo Bonzini w
On Tue, Jul 17, 2012 at 12:42 PM, Stefan Hajnoczi wrote:
> On Tue, Jul 17, 2012 at 12:26 PM, Michael S. Tsirkin wrote:
>> On Tue, Jul 17, 2012 at 12:11:15PM +0100, Stefan Hajnoczi wrote:
>>> On Tue, Jul 17, 2012 at 10:21 AM, Asias He wrote:
>>> > On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
>>>
On Tue, Jul 17, 2012 at 12:26 PM, Michael S. Tsirkin wrote:
> On Tue, Jul 17, 2012 at 12:11:15PM +0100, Stefan Hajnoczi wrote:
>> On Tue, Jul 17, 2012 at 10:21 AM, Asias He wrote:
>> > On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
>> >>
>> >> Il 17/07/2012 10:29, Asias He ha scritto:
>> >>>
>> >>>
On Tue, Jul 17, 2012 at 9:29 AM, Asias He wrote:
> On 07/16/2012 07:58 PM, Stefan Hajnoczi wrote:
>> Does the vhost-blk implementation do anything fundamentally different
>> from userspace? Where is the overhead that userspace virtio-blk has?
>
>
>
> Currently, no. But we could play with bio dire
On Tue, Jul 17, 2012 at 12:11:15PM +0100, Stefan Hajnoczi wrote:
> On Tue, Jul 17, 2012 at 10:21 AM, Asias He wrote:
> > On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
> >>
> >> Il 17/07/2012 10:29, Asias He ha scritto:
> >>>
> >>> So, vhost-blk at least saves ~6 syscalls for us in each request.
> >
On Tue, Jul 17, 2012 at 10:21 AM, Asias He wrote:
> On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
>>
>> Il 17/07/2012 10:29, Asias He ha scritto:
>>>
>>> So, vhost-blk at least saves ~6 syscalls for us in each request.
>>
>>
>> Are they really 6? If I/O is coalesced by a factor of 3, for example
>
On Tue, Jul 17, 2012 at 12:56:31PM +0200, Paolo Bonzini wrote:
> Il 17/07/2012 12:49, Michael S. Tsirkin ha scritto:
> >> Ok, that would make more sense. One difference between vhost-blk and
> >> vhost-net is that for vhost-blk there are also management actions that
> >> would trigger the switch,
Il 17/07/2012 12:49, Michael S. Tsirkin ha scritto:
>> Ok, that would make more sense. One difference between vhost-blk and
>> vhost-net is that for vhost-blk there are also management actions that
>> would trigger the switch, for example a live snapshot.
>> So a prerequisite for vhost-blk would b
On Tue, Jul 17, 2012 at 12:14:33PM +0200, Paolo Bonzini wrote:
> Il 17/07/2012 11:45, Michael S. Tsirkin ha scritto:
> >> So it begs the question, is it going to be used in production, or just a
> >> useful reference tool?
> >
> > Sticking to raw already makes virtio-blk faster, doesn't it?
> > In
Il 17/07/2012 11:45, Michael S. Tsirkin ha scritto:
>> So it begs the question, is it going to be used in production, or just a
>> useful reference tool?
>
> Sticking to raw already makes virtio-blk faster, doesn't it?
> In that vhost-blk looks to me like just another optimization option.
> Ideall
On Tue, Jul 17, 2012 at 11:32:45AM +0200, Paolo Bonzini wrote:
> Il 17/07/2012 11:21, Asias He ha scritto:
> >> It depends. Like vhost-scsi, vhost-blk has the problem of a crippled
> >> feature set: no support for block device formats, non-raw protocols,
> >> etc. This makes it different from vho
On Tue, Jul 17, 2012 at 10:52:10AM +0200, Paolo Bonzini wrote:
> Il 17/07/2012 10:29, Asias He ha scritto:
> > So, vhost-blk at least saves ~6 syscalls for us in each request.
>
> Are they really 6? If I/O is coalesced by a factor of 3, for example
> (i.e. each exit processes 3 requests), it's r
Il 17/07/2012 11:21, Asias He ha scritto:
>> It depends. Like vhost-scsi, vhost-blk has the problem of a crippled
>> feature set: no support for block device formats, non-raw protocols,
>> etc. This makes it different from vhost-net.
>
> Data-plane qemu also has this crippled feature set proble
On 07/17/2012 04:52 PM, Paolo Bonzini wrote:
Il 17/07/2012 10:29, Asias He ha scritto:
So, vhost-blk at least saves ~6 syscalls for us in each request.
Are they really 6? If I/O is coalesced by a factor of 3, for example
(i.e. each exit processes 3 requests), it's really 2 syscalls per reques
Il 17/07/2012 10:29, Asias He ha scritto:
> So, vhost-blk at least saves ~6 syscalls for us in each request.
Are they really 6? If I/O is coalesced by a factor of 3, for example
(i.e. each exit processes 3 requests), it's really 2 syscalls per request.
Also, is there anything we can improve? P
On 07/16/2012 07:58 PM, Stefan Hajnoczi wrote:
On Thu, Jul 12, 2012 at 4:35 PM, Asias He wrote:
This patchset adds vhost-blk support. vhost-blk is an in-kernel virtio-blk
device accelerator. Compared to userspace virtio-blk implementation, vhost-blk
gives about 5% to 15% performance improvement.
52 matches