On 01/07/15 15:46, Roger Pau Monne wrote:
> This new elfnote contains the 32bit entry point into the kernel. Xen will
> use this entry point in order to launch the guest kernel in 32bit protected
> mode with paging disabled.
[...]
> --- a/tools/xcutils/readnotes.c
> +++ b/tools/xcutils/readnotes.c
On 03/07/15 12:34, Roger Pau Monne wrote:
>
> And for the FreeBSD part:
Have you thought at all about what the Linux support for this mode would
look like?
I started briefly looking today but don't really have time to look into
it properly. I do think we want to use as much of the native/HVM co
On 08/07/15 09:56, Jan Beulich wrote:
> Rather than assuming only PV guests need special treatment (and
> dealing with that directly when an IRQ gets set up), keep all guest MSI
> IRQs masked until either the (HVM) guest unmasks them via vMSI or the
> (PV, PVHVM, or PVH) guest sets up an event chan
On 08/07/15 11:45, Jan Beulich wrote:
> David,
>
> I'm afraid we'll need another fixup here, even if things build fine
> despite the removal.
Ah, we get a generic implementation instead. Thanks for pointing this
out. I'll fix it.
David
On 08/07/15 06:54, Rajat Jain wrote:
> Eliminate the module_init function by using module_pci_driver()
This is not equivalent since this adds a useless module_exit() function.
David
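To illustrate the point, a simplified sketch (foo_pci_driver and the function names are placeholders, not the driver from the patch): the original module only registers the driver at init time and never unregisters it, whereas module_pci_driver() also emits a module_exit() that calls pci_unregister_driver().
#include <linux/module.h>
#include <linux/pci.h>
static struct pci_driver foo_pci_driver = {
	.name = "foo",
};
/* module_pci_driver(foo_pci_driver) expands to roughly this pair;
 * the original module only had the first half. */
static int __init foo_pci_driver_init(void)
{
	return pci_register_driver(&foo_pci_driver);
}
module_init(foo_pci_driver_init);
static void __exit foo_pci_driver_exit(void)
{
	pci_unregister_driver(&foo_pci_driver);
}
module_exit(foo_pci_driver_exit);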
On 08/07/15 11:58, Jan Beulich wrote:
On 08.07.15 at 11:39, wrote:
>> On 08/07/15 09:56, Jan Beulich wrote:
>>> Rather than assuming only PV guests need special treatment (and
>>> dealing with that directly when an IRQ gets set up), keep all guest MSI
>>> IRQs masked until either the (HVM) gu
On 07/07/15 04:34, Boris Ostrovsky wrote:
>
> A set of PVH-related patches.
>
> The first patch is an x86-64 ABI fix for PVH guests. The second is a small update
> that removes redundant memset (both on PV and PVH code paths)
>
> The rest is to enable non-privileged 32-bit PVH guests. This requires
implementation.
Signed-off-by: David Vrabel
---
xen/include/asm-arm/spinlock.h | 6 ++
xen/include/asm-x86/spinlock.h | 7 +++
xen/include/xen/spinlock.h | 1 +
3 files changed, 14 insertions(+)
create mode 100644 xen/include/asm-arm/spinlock.h
create mode 100644 xen/include/asm-x86
On 09/07/15 12:31, Paul Durrant wrote:
>
> The problem is that (at least my version of) git format-patch --notes
> refuses to put notes after the ---, but I'll resort to hand editing
> the patches before I post them.
You can add the changes to the commit message itself, then git
format-patch/send
On 08/05/15 14:34, Jan Beulich wrote:
> David,
>
> now that we're putting Xen 4.4.x underneath an older distro (SLE11)
> we've got to see that kexec doesn't work there. Initial investigation
> of our kexec person revealed that the destinations attempted to be
> written to by kexec_reloc()'s code f
On 07/05/15 14:57, Ian Campbell wrote:
>
> DECLARE_HYPERCALL_BUFFER_SHADOW seems to currently be unused, David
> added it in 60572c972b8d, I suspect to be used by migration v2, although
> perhaps it never was (at least not in tree yet).
It's used by kexec-tools.
David
On 08/05/15 22:14, Andrew Cooper wrote:
> When restoring a domain, check for unknown options in Image Header. Nothing
> good will come from attempting to continue.
>
> Signed-off-by: Andrew Cooper
> CC: David Vrabel
> CC: Ian Campbell
> CC: Ian Jackson
> CC: Wei
On 11/05/15 10:25, Jan Beulich wrote:
On 08.05.15 at 17:53, wrote:
>> On 08/05/15 14:34, Jan Beulich wrote:
>>> now that we're putting Xen 4.4.x underneath an older distro (SLE11)
>>> we've got to see that kexec doesn't work there. Initial investigation
>>> of our kexec person revealed that t
On 08/05/15 22:14, Andrew Cooper wrote:
> When restoring a domain, check for unknown options in Image Header. Nothing
> good will come from attempting to continue.
Section 2.3 (Fields) explicitly states that "Padding and reserved fields
are set to zero on save and must be ignored during restore".
On 11/05/15 10:23, David Vrabel wrote:
>
> The image header provides information about the format of the image. I
> suppose one could make an argument that a checkpointed stream is an
> infinite stream vs finite, but I think a CHECKPOINT_ENABLE record might
> be better?
On 08/05/15 10:36, Jan Beulich wrote:
>>
>> +}
>> +}
>> smp_mb();
>> }
>
> The old code had smp_mb() before _and_ after the check - is it really
> correct to drop the one before (or effectively replace it by smp_rmb()
> in observe_{lock,head}())?
Typical usage is:
d->is_dyi
On 08/05/15 10:47, Jan Beulich wrote:
On 30.04.15 at 17:33, wrote:
>> Pack struct hvm_domain to reduce it by 8 bytes. Thus reducing the
>> size of struct domain by 8 bytes.
>
> Is that really true _after_ the change to ticket locks?
Yes.
>> @@ -137,6 +131,12 @@ struct hvm_domain {
>>
Pack struct hvm_domain to reduce it by 8 bytes. Thus reducing the
size of struct domain by 8 bytes.
Signed-off-by: David Vrabel
---
xen/include/asm-x86/hvm/domain.h | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include
add_sized(ptr, inc) adds inc to the value at ptr using only the correct
size of loads and stores for the type of *ptr. The add is /not/ atomic.
This is needed for ticket locks to ensure the increment of the head ticket
does not affect the tail ticket.
Signed-off-by: David Vrabel
---
xen
add_sized(ptr, inc) adds inc to the value at ptr using only the correct
size of loads and stores for the type of *ptr. The add is /not/ atomic.
This is needed for ticket locks to ensure the increment of the head ticket
does not affect the tail ticket.
Signed-off-by: David Vrabel
Acked-by: Ian
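A minimal sketch of the intended semantics, assuming read_atomic()/write_atomic()-style helpers that each perform a single, correctly sized access (the real add_sized() is implemented per architecture):
/* Sketch only: one correctly sized load, a plain add, and one correctly
 * sized store.  There is no LOCK prefix, so the add itself is not
 * atomic, but adjacent fields of the containing object are never
 * touched by wider accesses. */
#define add_sized(p, x) ({                          \
    typeof(*(p)) x_ = (x);                          \
    write_atomic((p), read_atomic(p) + x_);         \
})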
Now that all architectures use a common ticket lock implementation for
spinlocks, remove the architecture specific byte lock implementations.
Signed-off-by: David Vrabel
Reviewed-by: Tim Deegan
Acked-by: Jan Beulich
Acked-by: Ian Campbell
---
xen/arch/arm/README.LinuxPrimitives | 28
same lock with a higher ticket.
Architectures need only provide arch_fetch_and_add() and two barriers:
arch_lock_acquire_barrier() and arch_lock_release_barrier().
Signed-off-by: David Vrabel
---
xen/common/spinlock.c| 118 --
xen/include/asm-arm
Use ticket locks for spin locks instead of the current byte locks.
Ticket locks are fair. This is an important property for hypervisor
locks.
Note that spin_lock_irq() and spin_lock_irqsave() now spin with irqs
disabled (previously, they would spin with irqs enabled if possible).
This is required
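A minimal sketch of the scheme with illustrative names (the in-tree spinlock_tickets_t and _spin_lock()/_spin_unlock() differ in detail; a little-endian head/tail layout is assumed): take a ticket by atomically incrementing the tail, spin until the head catches up, and release by bumping the head with a sized, non-atomic add.
typedef union {
    uint32_t head_tail;
    struct {
        uint16_t head;      /* ticket currently being served */
        uint16_t tail;      /* next ticket to hand out */
    };
} ticket_lock_t;
static void ticket_lock(ticket_lock_t *lock)
{
    /* Atomically take the next ticket (tail is the high half). */
    ticket_lock_t old = {
        .head_tail = arch_fetch_and_add(&lock->head_tail, 1u << 16)
    };
    while ( read_atomic(&lock->head) != old.tail )
        cpu_relax();
    arch_lock_acquire_barrier();
}
static void ticket_unlock(ticket_lock_t *lock)
{
    arch_lock_release_barrier();
    /* Only the lock holder writes head, so a sized, non-atomic add is
     * sufficient -- this is what add_sized() exists for. */
    add_sized(&lock->head, 1);
}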
On 11/05/15 15:37, David Vrabel wrote:
> add_sized(ptr, inc) adds inc to the value at ptr using only the correct
> size of loads and stores for the type of *ptr. The add is /not/ atomic.
>
> This is needed for ticket locks to ensure the increment of the head ticket
> does not
On 12/05/15 10:35, Andrew Cooper wrote:
> Checkpointed streams need to signal the end of a consistent view of VM state,
> and the start of the libxl data.
[...]
> --- a/docs/specs/libxc-migration-stream.pandoc
> +++ b/docs/specs/libxc-migration-stream.pandoc
[...]
> @@ -578,6 +578,23 @@ The verify
On 05/05/15 13:34, Jan Beulich wrote:
On 30.04.15 at 15:28, wrote:
>> From: Malcolm Crossley
>>
>> Performance analysis of aggregate network throughput with many VMs
>> shows that performance is significantly limited by contention on the
>> maptrack lock when obtaining/releasing maptrack hand
On 12/05/15 12:37, Jan Beulich wrote:
On 12.05.15 at 13:01, wrote:
>> On 05/05/15 13:34, Jan Beulich wrote:
>> On 30.04.15 at 15:28, wrote:
@@ -1430,6 +1456,17 @@ gnttab_setup_table(
gt = d->grant_table;
write_lock(&gt->lock);
+/* Tracking of mapped
The series makes the grant table locking more fine-grained and adds
per-VCPU maptrack free lists, which greatly improves scalability.
The series builds on the original series by Matt Wilson and Christoph
Egger from Amazon.
Performance results for aggregate intrahost network throughput
(between 20
maptrack handle get/put, significantly improving performance
with many concurrent map/unmap operations.
Signed-off-by: Matt Wilson
[chegger: ported to xen-staging, split into multiple commits]
Signed-off-by: Christoph Egger
Signed-off-by: David Vrabel
---
docs/misc/grant-tables.txt | 25
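A rough sketch of the per-VCPU free list idea, with hypothetical names (the in-tree code threads the free list through the maptrack entries themselves and keeps the list head per VCPU):
#define MAPTRACK_TAIL (~0u)
/* Only the owning VCPU allocates from its list, so the get/put hot
 * path needs no global grant-table lock. */
struct vcpu_maptrack {
    unsigned int head;      /* first free handle, or MAPTRACK_TAIL */
    unsigned int tail;      /* last free handle */
};
/* 'next' links free entries together (hypothetical representation). */
static int maptrack_handle_get(struct vcpu_maptrack *mt,
                               const unsigned int *next)
{
    unsigned int h = mt->head;
    if ( h == MAPTRACK_TAIL )
        return -1;          /* empty: caller must grow the maptrack */
    mt->head = next[h];
    return h;
}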
k limit is multiplied by the number of VCPUs. This
ensures that a worst case domain that only performs grant table
operations via one VCPU will not see a lower map track limit.
Signed-off-by: Malcolm Crossley
Signed-off-by: David Vrabel
Acked-by: Tim Deegan
---
xen/common/grant_table.c
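In other words (illustrative helper, not the in-tree code):
/* With per-VCPU free lists, a domain doing all of its grant maps from
 * a single VCPU must still be able to reach the old per-domain limit,
 * so the limit scales with the VCPU count. */
static unsigned int maptrack_limit(unsigned int per_vcpu_limit,
                                   unsigned int nr_vcpus)
{
    return per_vcpu_limit * nr_vcpus;
}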
.
Signed-off-by: Matt Wilson
[chegger: ported to xen-staging, split into multiple commits]
Signed-off-by: Christoph Egger
Signed-off-by: David Vrabel
---
docs/misc/grant-tables.txt| 31 -
xen/arch/arm/mm.c |4 +-
xen/arch/x86/mm.c |4 +-
xen/common
On 13/05/15 10:23, Jan Beulich wrote:
On 11.05.15 at 16:37, wrote:
>> @@ -53,6 +67,19 @@ void __bad_atomic_size(void);
>> } \
>> })
>>
>> +#define add_sized(p, x) ({\
>> +typeof(*(p)) __x = (x);
On 13/05/15 09:12, Jan Beulich wrote:
On 13.05.15 at 09:35, wrote:
>> Fundamentally if you are transferring control in long mode you have to
>> set up some page table. A giant identity-mapped page table that can use
>> 1G or 2M pages takes up very little memory, and can be very simply
>> and
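As a worked example (not code from any posted patch): identity-mapping the first 512GiB with 1GiB pages needs one PML4 page plus one fully populated PDPT, i.e. 8KiB of page tables in total.
#include <stdint.h>
#define PTE_P   0x01ULL     /* present */
#define PTE_RW  0x02ULL     /* writable */
#define PTE_PS  0x80ULL     /* 1GiB page when set in a PDPT entry */
/* Identity-map 0..512GiB with 512 1GiB entries.  pdpt_phys is the
 * physical address of the PDPT page (equal to its virtual address in
 * an identity-mapped environment). */
static void build_identity_map(uint64_t pml4[512], uint64_t pdpt[512],
                               uint64_t pdpt_phys)
{
    unsigned int i;
    for ( i = 0; i < 512; i++ )
        pdpt[i] = ((uint64_t)i << 30) | PTE_P | PTE_RW | PTE_PS;
    pml4[0] = pdpt_phys | PTE_P | PTE_RW;
}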
On 12/05/15 18:18, Joao Martins wrote:
>
> Packet I/O Tests:
>
> Measured on an Intel Xeon E5-1650 v2, Xen 4.5, no HT. Used pktgen "burst 1"
> and "clone_skbs 10" (to avoid alloc skb overheads) with various pkt
> sizes. All tests are DomU <-> Dom0, unless specified otherwise.
Are all these me
same lock with a higher ticket.
Architectures need only provide arch_fetch_and_add() and two barriers:
arch_lock_acquire_barrier() and arch_lock_release_barrier().
Signed-off-by: David Vrabel
---
xen/common/spinlock.c| 116 --
xen/include/asm-arm
Use ticket locks for spin locks instead of the current byte locks.
Ticket locks are fair. This is an important property for hypervisor
locks.
Note that spin_lock_irq() and spin_lock_irqsave() now spin with irqs
disabled (previously, they would spin with irqs enabled if possible).
This is required
Now that all architectures use a common ticket lock implementation for
spinlocks, remove the architecture specific byte lock implementations.
Signed-off-by: David Vrabel
Reviewed-by: Tim Deegan
Acked-by: Jan Beulich
Acked-by: Ian Campbell
---
xen/arch/arm/README.LinuxPrimitives | 28
Pack struct hvm_domain to reduce it by 8 bytes. Thus reducing the
size of struct domain by 8 bytes.
Signed-off-by: David Vrabel
---
xen/include/asm-x86/hvm/domain.h | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include
On 14/05/15 12:24, Tim Deegan wrote:
> At 15:37 +0100 on 11 May (1431358625), David Vrabel wrote:
>> Pack struct hvm_domain to reduce it by 8 bytes. Thus reducing the
>> size of struct domain by 8 bytes.
>
> In my builds (non-debug, on current staging), this makes no d
On 12/05/15 11:58, Bob Liu wrote:
> After commit 1b1586eeeb8c ("xenbus_client: Extend interface to
> support multi-page ring"), the Linux xenbus driver can support multi-page rings.
>
> Based on this interface, we got some impressive improvements by using
> multi-page
> ring in xen-block driver. If us
On 15/05/15 11:39, Bob Liu wrote:
>
> On 05/15/2015 05:51 PM, David Vrabel wrote:
>> On 12/05/15 11:58, Bob Liu wrote:
>>> After commit 1b1586eeeb8c ("xenbus_client: Extend interface to
>>> support multi-page ring"), Linux xenbus driver can support multi-pa
On 14/05/15 18:00, Julien Grall wrote:
> Hi all,
>
> ARM64 Linux supports both 4KB and 64KB page granularity. However, the Xen
> hypercall interface and PV protocol are always based on 4KB page granularity.
>
> Any attempt to boot a Linux guest with 64KB pages enabled will result in a
> guest c
On 07/05/15 17:55, Boris Ostrovsky wrote:
> Commit 2b953a5e994c ("xen: Suspend ticks on all CPUs during suspend")
> introduced xen_arch_suspend() routine but did so only for x86, breaking
> ARM builds.
>
> We need to add it to ARM as well.
Applied to for-linus-4.1b, thanks.
Sorry for the delay,
On 18/05/15 11:16, Jan Beulich wrote:
On 14.05.15 at 13:21, wrote:
>> void _spin_lock(spinlock_t *lock)
>> {
>> +spinlock_tickets_t tickets = { .tail = 1, };
>
> This breaks the build on gcc 4.3.x (due to tail being a member of an
> unnamed structure member of a union).
I don't have a
On 12/05/15 18:18, Joao Martins wrote:
> Refactors a little bit how grants are stored by moving
> grant_rx_ref/grant_tx_ref and grant_tx_page into their
> own structure, namely struct grant.
Reviewed-by: David Vrabel
Although...
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net
On 12/05/15 18:18, Joao Martins wrote:
> Refactors how grants are claimed/released/revoked by moving that code
> into claim_grant and release_grant helper routines that can be shared
> in both TX/RX path.
Reviewed-by: David Vrabel
But should this be generic? Is it useful to other
On 12/05/15 18:18, Joao Martins wrote:
> "feature-persistent" check on xenbus for persistent grants
> support on the backend.
You can't expose/check for this feature until you actually support it.
This should probably be the last patch.
David
On 12/05/15 18:18, Joao Martins wrote:
> Instead of grant/revoking the buffer related to the skb, it will use
> an already granted page and memcpy to it. The grants will be mapped
> by xen-netback and reused over time, but only unmapped when the vif
> disconnects, as opposed to every packet.
>
> T
On 12/05/15 18:18, Joao Martins wrote:
> It allows a newly allocated skb to reuse the gref taken from the
> pending_ring, which means xennet will grant the pages once and release
> them only when freeing the device. It changes how netfront handles new
> skbs to be able to reuse the allocated pages
On 20/04/15 06:23, Juergen Gross wrote:
> Support 64 bit pv-domains with more than 512GB of memory.
Reviewed-by: David Vrabel
I'll try and queue this for 4.2.
David
On 19/05/15 11:20, Joao Martins wrote:
>
> On 18 May 2015, at 17:55, David Vrabel wrote:
>> On 12/05/15 18:18, Joao Martins wrote:
>>> Instead of grant/revoking the buffer related to the skb, it will use
>>> an already granted page and memcpy to it. The grants wil
On 18/05/15 16:49, Jan Beulich wrote:
On 18.05.15 at 17:33, wrote:
>> On 18/05/15 11:16, Jan Beulich wrote:
>> On 14.05.15 at 13:21, wrote:
void _spin_lock(spinlock_t *lock)
{
+spinlock_tickets_t tickets = { .tail = 1, };
>>>
>>> This breaks the build on gcc 4.3.x (d
).
Signed-off-by: David Vrabel
---
xen/common/spinlock.c |2 +-
xen/include/xen/spinlock.h |2 ++
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index c8dc8ba..29149d1 100644
--- a/xen/common/spinlock.c
+++ b/xen/common
On 19/05/15 09:02, Jan Beulich wrote:
>
> Which then of course raises the question - is taking the read lock
> here and in several other places really sufficient? The thing that the
> original spin lock appears to protect here is not only the grant table
> structure itself, but also the active ent
On 19/05/15 09:27, Jan Beulich wrote:
On 12.05.15 at 16:15, wrote:
>> @@ -546,15 +554,28 @@ static void mapcount(
>>
>> *wrc = *rdc = 0;
>>
>> +/*
>> + * N.B.: while taking the local maptrack spinlock prevents any
>> + * mapping changes, the remote active entries could b
On 14/05/15 18:00, Julien Grall wrote:
> Using xen/page.h will be necessary later for using common xen page
> helpers.
>
> As xen/page.h already include asm/xen/page.h, always use the later.
Reviewed-by: David Vrabel
David
On 14/05/15 18:00, Julien Grall wrote:
> SPP was used by the grant table v2 code which has been removed in
> commit 438b33c7145ca8a5131a30c36d8f59bce119a19a "xen/grant-table:
> remove support for V2 tables".
Reviewed-by: David Vrabel
David
ss every time in the
> loop and directly increment the parameter as we don't use it later.
Reviewed-by: David Vrabel
But...
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -379,16 +379,16 @@ int xenbus_grant_ring(struct xenbus_device *
On 14/05/15 18:00, Julien Grall wrote:
> rx->status is an int16_t, print it using %d rather than %u in order to
> have a meaningful value when the field is negative.
Reviewed-by: David Vrabel
David
On 14/05/15 18:00, Julien Grall wrote:
> With 64KB page granularity support in Linux, a page will be split across
> multiple MFNs (Xen is using 4KB page granularity). Those MFNs may not be
> contiguous.
>
> With the offset in the page, the helper will be able to know which MFN
> the driver needs
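To illustrate the arithmetic (a 64KB Linux page covers 16 Xen frames of 4KB each), a hypothetical helper; the names are made up and the guest-PFN-to-MFN translation is deliberately omitted:
#include <linux/mm.h>
#define XEN_PAGE_SHIFT 12   /* Xen always uses 4KB frames */
/* First 4KB Xen frame backing a Linux page (PAGE_SHIFT is 16 with 64KB
 * pages, 12 with 4KB pages, where the shift below becomes zero). */
static inline unsigned long first_xen_pfn_of(struct page *page)
{
	return page_to_pfn(page) << (PAGE_SHIFT - XEN_PAGE_SHIFT);
}
/* Which backing Xen frame a byte offset into the page falls in. */
static inline unsigned long xen_pfn_for_offset(struct page *page,
					       unsigned int offset)
{
	return first_xen_pfn_of(page) + (offset >> XEN_PAGE_SHIFT);
}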
On 14/05/15 18:00, Julien Grall wrote:
> The xenstore ring is always based on the page granularity of Xen.
[...]
> --- a/drivers/xen/xenbus/xenbus_probe.c
> +++ b/drivers/xen/xenbus/xenbus_probe.c
> @@ -713,7 +713,7 @@ static int __init xenstored_local_init(void)
>
> xen_store_mfn = xen_sta
On 19/05/15 15:49, Boris Ostrovsky wrote:
> On 05/19/2015 10:36 AM, Arnd Bergmann wrote:
>> On Tuesday 19 May 2015 09:57:19 Boris Ostrovsky wrote:
>>> On 05/19/2015 08:58 AM, Arnd Bergmann wrote:
A recent bug fix for x86 broke Xen on ARM for the case that
CONFIG_HIBERNATE_CALLBACKS is ena
On 14/05/15 18:00, Julien Grall wrote:
> For ARM64 guests, Linux is able to support either 64K or 4K page
> granularity. However, the hypercall interface is always based on 4K
> page granularity.
>
> With 64K page granularity, a single page will be spread over multiple
> Xen frames.
>
> When a dr
s sharing the page still there
>
> * event array: always extend the array event by 64K (i.e 16 4K
> chunk). That would require more care when we fail to expand the
> event channel.
I think you want an xen_alloc_page_for_xen() or similar to allocate a 4
KiB size/aligned block.
But
sed.
>
> We could decrease the memory wasted by sharing the page with multiple
> grants. It will require some care with the {Set,Clear}ForeignPage macro.
>
> Note that no changes have been made in the x86 code because both Linux
> and Xen will only use 4KB page granularity
On 14/05/15 18:01, Julien Grall wrote:
> The hypercall interface (as well as the toolstack) is always using 4KB
> page granularity. When the toolstack asks to map a series of
> guest PFNs in a batch, it expects to have the pages mapped contiguously in
> its virtual memory.
>
> When Linux is u
0 0 xen-percpu-virq debug1
69: 0 0 xen-dyn-virq xen-pcpu
74: 0 0 xen-dyn-virq mce
75: 29 0 xen-dyn-virq hvc_console
Signed-off-by: David Vrabel
Cc:
---
drivers/tty/hvc/hvc_xen.c|
/enlighten.c |1 +
drivers/tty/hvc/hvc_xen.c|2 +-
drivers/xen/events/events_base.c | 12
include/xen/events.h |2 +-
4 files changed, 11 insertions(+), 6 deletions(-)
Boris Ostrovsky (1):
xen/arm: Define xen_arch_suspend()
David Vrabel
.
Based on a patch originally by Matt Wilson .
Signed-off-by: David Vrabel
---
docs/misc/grant-tables.txt | 42 -
xen/common/grant_table.c | 146
2 files changed, 147 insertions(+), 41 deletions(-)
diff --git a/docs/misc/grant-tables.txt b
Wilson .
Signed-off-by: David Vrabel
---
docs/misc/grant-tables.txt| 30 +
xen/arch/arm/mm.c |4 +-
xen/arch/x86/mm.c |4 +-
xen/common/grant_table.c | 148 ++---
xen/include/xen/grant_table.h |9 ++-
5
Split grant table lock into two separate locks. One to protect
maptrack state (maptrack_lock) and one for everything else (lock).
Based on a patch originally by Matt Wilson .
Signed-off-by: David Vrabel
---
docs/misc/grant-tables.txt|9 +
xen/common/grant_table.c |9
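Schematically (illustrative only; the authoritative description is the docs/misc/grant-tables.txt change, and a later patch in the series converts the main lock to an rwlock):
struct grant_table {
    /* Protects only the maptrack limit and maptrack free list. */
    spinlock_t maptrack_lock;
    /* Protects everything else: shared/active entries, table growing,
     * version changes, ... */
    spinlock_t lock;
    /* ... remaining fields unchanged ... */
};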
k limit is multiplied by the number of VCPUs. This
ensures that a worst case domain that only performs grant table
operations via one VCPU will not see a lower map track limit.
Signed-off-by: Malcolm Crossley
Signed-off-by: David Vrabel
Acked-by: Tim Deegan
---
xen/common/grant_table.c
The series builds on the original series by Matt Wilson and Christoph
Egger from Amazon.
Performance results for aggregate intrahost network throughput
(between 20 VM pairs, with 16 dom0 VCPUs) show substantial
improvements.
Throughput/Gbit/s
Base
On 21/05/15 11:32, Jan Beulich wrote:
On 20.05.15 at 17:54, wrote:
>> @@ -254,23 +254,23 @@ double_gt_lock(struct grant_table *lgt, struct
>> grant_table *rgt)
>> {
>> if ( lgt < rgt )
>> {
>> -spin_lock(&lgt->lock);
>> -spin_lock(&rgt->lock);
>> +write_loc
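For context, the function being converted in the hunk above orders the two locks by address so that concurrent callers locking the same pair cannot deadlock; after the conversion it looks roughly like this:
static inline void double_gt_lock(struct grant_table *lgt,
                                  struct grant_table *rgt)
{
    /* Consistent ordering by address avoids ABBA deadlock when two
     * grant tables must be held at once. */
    if ( lgt < rgt )
    {
        write_lock(&lgt->lock);
        write_lock(&rgt->lock);
    }
    else
    {
        if ( lgt != rgt )
            write_lock(&rgt->lock);
        write_lock(&lgt->lock);
    }
}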
On 21/05/15 08:46, Jan Beulich wrote:
On 20.05.15 at 17:54, wrote:
>> @@ -702,6 +729,7 @@ __gnttab_map_grant_ref(
>>
>> cache_flags = (shah->flags & (GTF_PAT | GTF_PWT | GTF_PCD) );
>>
>> +active_entry_release(act);
>> spin_unlock(&rgt->lock);
>
> Just for my understanding:
On 21/05/15 14:36, David Vrabel wrote:
> On 21/05/15 11:32, Jan Beulich wrote:
>>>>> On 20.05.15 at 17:54, wrote:
>>> @@ -842,8 +845,6 @@ __gnttab_map_grant_ref(
>>> mt->ref = op->ref;
>>> mt->flags = op->flags;
>>>
On 21/05/15 15:53, Jan Beulich wrote:
On 21.05.15 at 15:36, wrote:
>> On 21/05/15 11:32, Jan Beulich wrote:
>> On 20.05.15 at 17:54, wrote:
@@ -827,9 +828,11 @@ __gnttab_map_grant_ref(
if ( (wrc + rdc) == 0 )
err = iommu_map_page(ld, frame, fr
On 22/05/15 07:37, Jan Beulich wrote:
On 21.05.15 at 17:16, wrote:
>> On 21/05/15 15:53, Jan Beulich wrote:
>> On 21.05.15 at 15:36, wrote:
On 21/05/15 11:32, Jan Beulich wrote:
On 20.05.15 at 17:54, wrote:
>> @@ -827,9 +828,11 @@ __gnttab_map_grant_ref(
>>
On 22/05/15 12:49, Marek Marczykowski-Górecki wrote:
> Hi all,
>
> I'm experiencing xen-netfront crash when doing xl network-detach while
> some network activity is going on at the same time. It happens only when
> domU has more than one vcpu. Not sure if this matters, but the backend
> is in anot
On 22/05/15 17:42, Marek Marczykowski-Górecki wrote:
> On Fri, May 22, 2015 at 05:25:44PM +0100, David Vrabel wrote:
>> On 22/05/15 12:49, Marek Marczykowski-Górecki wrote:
>>> Hi all,
>>>
>>> I'm experiencing xen-netfront crash when doing xl network-detach
es to delete the
napi instances that have already been freed.
Fix this by fully destroying the queues (which includes deleting the napi
instances) before freeing the netdevice.
Reported-by: Marek Marczykowski
Signed-off-by: David Vrabel
---
drivers/net/xen-netfront.c | 15 ++-
1 fi
.
Based on a patch originally by Matt Wilson .
Signed-off-by: David Vrabel
---
Changes in v10:
- Reduce scope of act in grant_map_exists().
- Make unlock sequence in error paths consistent in
__acquire_grant_for_copy().
- gnt_unlock_out -> gt_unlock_out in __acquire_grant_for_copy().
---
docs/m
Split grant table lock into two separate locks. One to protect
maptrack state (maptrack_lock) and one for everything else (lock).
Based on a patch originally by Matt Wilson .
Signed-off-by: David Vrabel
---
docs/misc/grant-tables.txt|9 +
xen/common/grant_table.c |9
Wilson .
Signed-off-by: David Vrabel
---
v10:
- In gnttab_map_grant_ref(), keep double lock around maptrack update
if gnttab_need_iommu_mapping(). Use a wmb(), otherwise.
---
docs/misc/grant-tables.txt| 30
xen/arch/arm/mm.c |4 +-
xen/arch/x86/mm.c
ximum number of maptrack frames by 4 times
because: a) struct grant_mapping is now 16 bytes (instead of 8); and
b) a guest may not evenly distribute all the grant map operations
across the VCPUs (meaning some VCPUs need more maptrack entries than
others).
Signed-off-by: Malcolm Crossley
Signed-off-by:
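The sizing works out as follows, assuming 4KiB maptrack frames:
/* Entries per maptrack frame:
 *   before: 4096 / sizeof(struct grant_mapping) = 4096 /  8 = 512
 *   after:  4096 / 16                                        = 256
 * Multiplying the maximum number of frames by 4 therefore doubles the
 * total number of maptrack entries, leaving headroom for VCPUs that
 * perform a disproportionate share of the grant map operations. */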
The series builds on the original series by Matt Wilson and Christoph
Egger from Amazon.
Performance results for aggregate intrahost network throughput
(between 20 VM pairs, with 16 dom0 VCPUs) show substantial
improvements.
Throughput/Gbit/s
Base
.
Signed-off-by: David Vrabel
---
drivers/net/xen-netfront.c | 15 ++-
1 file changed, 2 insertions(+), 13 deletions(-)
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 3f45afd..e031c94 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
On 20/04/15 06:23, Juergen Gross wrote:
> 64 bit pv-domains under Xen are limited to 512 GB of RAM today. The
> main reason has been the 3 level p2m tree, which was replaced by the
> virtual mapped linear p2m list. Parallel to the p2m list which is
> being used by the kernel itself there is a 3 lev
On 27/05/15 17:25, David Vrabel wrote:
> On 20/04/15 06:23, Juergen Gross wrote:
>> 64 bit pv-domains under Xen are limited to 512 GB of RAM today. The
>> main reason has been the 3 level p2m tree, which was replaced by the
>> virtual mapped linear p2m list. Parallel to
On 28/05/15 16:39, Jan Beulich wrote:
On 26.05.15 at 20:00, wrote:
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -57,7 +57,7 @@ integer_param("gnttab_max_frames", max_grant_frames);
>> * New options allow to set max_maptrack_frames and
>> * map_grant_table_fram
On 28/05/15 15:55, Jan Beulich wrote:
On 26.05.15 at 20:00, wrote:
>> @@ -254,23 +254,23 @@ double_gt_lock(struct grant_table *lgt, struct
>> grant_table *rgt)
>> {
>> if ( lgt < rgt )
>> {
>> -spin_lock(&lgt->lock);
>> -spin_lock(&rgt->lock);
>> +write_loc
On 29/05/15 17:24, Ian Campbell wrote:
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -235,6 +235,7 @@ static int netback_remove(struct xenbus_device *dev)
> kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE);
> xen_unregister_watchers(b
On 29/05/15 09:31, Jan Beulich wrote:
On 28.05.15 at 18:09, wrote:
>> On 28/05/15 15:55, Jan Beulich wrote:
>> On 26.05.15 at 20:00, wrote:
@@ -254,23 +254,23 @@ double_gt_lock(struct grant_table *lgt, struct
grant_table *rgt)
{
if ( lgt < rgt )
{
>>>
The series builds on the original series by Matt Wilson and Christoph
Egger from Amazon.
Performance results for aggregate intrahost network throughput
(between 20 VM pairs, with 16 dom0 VCPUs) show substantial
improvements.
Throughput/Gbit/s
Base
Wilson .
Signed-off-by: David Vrabel
---
v11:
- Call gnttab_need_iommu_mapping() once for each lock/unlock.
- Take active entry lock in gnttab_transfer().
- Use read_atomic() when checking maptrack flags for validity (note:
all other reads of map->flags are either after it is validated
Split grant table lock into two separate locks. One to protect
maptrack state (maptrack_lock) and one for everything else (lock).
Based on a patch originally by Matt Wilson .
Signed-off-by: David Vrabel
Reviewed-by: Jan Beulich
---
docs/misc/grant-tables.txt|9 +
xen/common
.
Based on a patch originally by Matt Wilson .
Signed-off-by: David Vrabel
Reviewed-by: Jan Beulich
---
Changes in v11:
- Fix exists test in gnttab_map_exists().
Changes in v10:
- Reduce scope of act in grant_map_exists().
- Make unlock sequence in error paths consistent in
__acquire_grant_for_copy
by 4 times
because: a) struct grant_mapping is now 16 bytes (instead of 8); and
b) a guest may not evenly distribute all the grant map operations
across the VCPUs (meaning some VCPUs need more maptrack entries than
others).
Signed-off-by: Malcolm Crossley
Signed-off-by: David Vrabel
Acked-by
On 02/06/15 17:26, David Vrabel wrote:
> Performance analysis of aggregate network throughput with many VMs
> shows that performance is significantly limited by contention on the
> maptrack lock when obtaining/releasing maptrack handles from the free
> list.
Sometime during all this re
On 22/06/16 11:54, David Vrabel wrote:
> On 21/06/16 20:31, Boris Ostrovsky wrote:
>> On 06/21/2016 12:09 PM, David Vrabel wrote:
>>> When page tables entries are set using xen_set_pte_init() during early
>>> boot there is no page fault handler that could handle a fault