From: Greg KH [g...@kroah.com]
Sent: Wednesday, November 03, 2010 7:36 AM
> But you should. Remember, be consistent.
> Care to redo these again?
Sure, I will update the patches.
Thanks,
- Haiyang
Use newly added for_each_console for iterating consoles.
Signed-off-by: Jiri Slaby
Cc: Jeremy Fitzhardinge
Cc: Chris Wright
Cc: virtualizat...@lists.osdl.org
Cc: xen-de...@lists.xensource.com
Cc: linux-fb...@vger.kernel.org
---
drivers/video/xen-fbfront.c |2 +-
1 files changed, 1 insertio
On 11/03/2010 01:03 PM, Ian Molton wrote:
>
> The virtio driver enforces the PID field and understands the packet
> format used. It's better than using serial. It's also just one driver -
> which doesn't have any special interdependencies and can be extended or
> got rid of in future if and when bet
On 01/11/10 13:28, Anthony Liguori wrote:
> On 11/01/2010 06:53 AM, Alon Levy wrote:
>> While we (speaking as part of the SPICE developers) want to have the same
>> support in our virtual GPU for 3d as we have for 2d, we just don't at
>> this point in time.
Would it be helpful to you to have /som
On 11/03/2010 11:13 AM, Eric Dumazet wrote:
> Le mercredi 03 novembre 2010 à 10:59 -0400, Jeremy Fitzhardinge a
> écrit :
>> From: Jeremy Fitzhardinge
>>
>> If we don't need to use a locked inc for unlock, then implement it in C.
>>
>> Signed-off-by: Jeremy Fitzhardinge
>> ---
>> arch/x86/includ
On 29/10/10 12:18, Rusty Russell wrote:
> On Wed, 27 Oct 2010 11:30:31 pm Ian Molton wrote:
>> On 19/10/10 11:39, Avi Kivity wrote:
>>> On 10/19/2010 12:31 PM, Ian Molton wrote:
>>
> 2. should start with a patch to the virtio-pci spec to document what
> you're doing
Where can I fi
On 01/11/10 15:57, Anthony Liguori wrote:
>> It very much is. It supports fully visually integrated rendering (no
>> overlay windows) and even compositing GL window managers work fine,
>> even if running 3D apps under them.
>
> Does the kernel track userspace pid and pass that information to qemu?
From: Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock_types.h |3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/spinlock_types.h
b/arch/x86/include/asm/spinlock_types.h
index b396ed5..def8010 100644
--- a/ar
On 11/03/2010 11:11 AM, Eric Dumazet wrote:
> Le mercredi 03 novembre 2010 à 10:59 -0400, Jeremy Fitzhardinge a
> écrit :
>> From: Jeremy Fitzhardinge
>>
>> The inner loop of __ticket_spin_lock isn't doing anything very special,
>> so reimplement it in C.
>>
>> For the 8 bit ticket lock variant, w
Le mercredi 03 novembre 2010 à 10:59 -0400, Jeremy Fitzhardinge a
écrit :
> From: Jeremy Fitzhardinge
>
> If we don't need to use a locked inc for unlock, then implement it in C.
>
> Signed-off-by: Jeremy Fitzhardinge
> ---
> arch/x86/include/asm/spinlock.h | 33 ++---
Le mercredi 03 novembre 2010 à 10:59 -0400, Jeremy Fitzhardinge a
écrit :
> From: Jeremy Fitzhardinge
>
> The inner loop of __ticket_spin_lock isn't doing anything very special,
> so reimplement it in C.
>
> For the 8 bit ticket lock variant, we use a register union to get direct
> access to the
From: Jeremy Fitzhardinge
Aside from the particular form of the xadd instruction, the small- and large-ticket lock variants are identical.
So factor out the xadd and use common code for the rest.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 42 ++
1 files changed, 2
From: Jeremy Fitzhardinge
Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.
xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the ch
From: Jeremy Fitzhardinge
When reading the 'waiting' counter, use a longer-than-necessary read
which also overlaps 'head'. This read is guaranteed to be in-order
with respect to the unlock write to 'head', thereby eliminating the
need for an explicit mb() to enforce the read-after-write ordering
From: Jeremy Fitzhardinge
When a CPU blocks by calling into __ticket_lock_spinning, keep a count in
the spinlock. This allows __ticket_lock_kick to more accurately tell
whether it has any work to do (in many cases, a spinlock may be contended,
but none of the waiters have gone into blocking).
From: Jeremy Fitzhardinge
Make it clearer what fields head_tail is actually overlapping with.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h |2 +-
arch/x86/include/asm/spinlock_types.h |4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git
From: Jeremy Fitzhardinge
It's only necessary to prevent the compiler from reordering code out of
the locked region past the unlock. Putting too many barriers in prevents
the compiler from generating the best code when PV spinlocks are enabled.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/
From: Jeremy Fitzhardinge
If we don't put a real memory barrier between the unlocking increment
of the queue head and the check for lock waiters, we can end up with a
deadlock as a result of the unlock write being reordered with respect
to the waiters read. In effect, the check on the waiters co
On 13:46 Mon 01 Nov 2010, Haiyang Zhang wrote:
> -static int HvQueryHypervisorInfo(void)
> +static int hvquery_hypervisor_info(void)
> -static u64 HvDoHypercall(u64 control, void *input, void *output)
> +static u64 hvdo_hypercall(u64 control, void *input, void *output)
Should these be hv_do_hyperc
From: Jeremy Fitzhardinge
The inner loop of __ticket_spin_lock isn't doing anything very special,
so reimplement it in C.
For the 8 bit ticket lock variant, we use a register union to get direct
access to the lower and upper bytes in the tickets, but unfortunately gcc
won't generate a direct com
From: Jeremy Fitzhardinge
Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).
Ticket locks have a number of nice pro
From: Jeremy Fitzhardinge
Add a barrier() at the end of __raw_spin_unlock() to prevent instructions
from after the locked region from being reordered into it. In theory its
absence shouldn't cause any problems, but in practice, the system locks up
under load...
Signed-off-by: Jeremy Fitzhardinge
From: Jeremy Fitzhardinge
The code size expands somewhat, and it's probably better to just call
a function rather than inline it.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/Kconfig |3 +++
kernel/Kconfig.locks |2 +-
2 files changed, 4 insertions(+), 1 deletions(-)
diff --git
From: Jeremy Fitzhardinge
Make sure the barrier in arch_spin_lock is definitely in the code path.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/includ
From: Jeremy Fitzhardinge
The unlock code is typically inlined throughout the kernel, so it's useful
to make sure there's minimal register pressure overhead from the presence
of the unlock_kick pvop call.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/paravirt.h |2 +-
ar
From: Jeremy Fitzhardinge
Hi all,
This series does two major things:
1. It converts the bulk of the implementation to C, and makes the
"small ticket" and "large ticket" code common. Only the actual
size-dependent asm instructions are specific to the ticket size.
The resulting generate
From: Jeremy Fitzhardinge
Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path. To avoid this, convert it to using the pvops call
From: Jeremy Fitzhardinge
A few cleanups to the way spinlocks are defined and accessed:
- define __ticket_t which is the size of a spinlock ticket (ie, enough
bits to hold all the cpus)
- Define struct arch_spinlock as a union containing plain slock and
the head and tail tickets
- Use he
From: Jeremy Fitzhardinge
Make trylock code common regardless of ticket size.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 55 +---
arch/x86/include/asm/spinlock_types.h |3 ++
2 files changed, 19 insertions(+), 39 deletions(-
From: Jeremy Fitzhardinge
Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 35 +--
1 f
From: Jeremy Fitzhardinge
If we don't need to use a locked inc for unlock, then implement it in C.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 33 ++---
1 files changed, 18 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/
From: Jeremy Fitzhardinge
Make the bulk of __ticket_spin_lock look identical for large and small
number of cpus.
Signed-off-by: Jeremy Fitzhardinge
---
arch/x86/include/asm/spinlock.h | 23 ---
1 files changed, 8 insertions(+), 15 deletions(-)
diff --git a/arch/x86/inclu
On Wed, 2010-10-27 at 13:59 -0700, H. Peter Anvin wrote:
> I'll check it this evening when I'm at a working network again :(
Did this get applied? It seems to affect 2.6.32.x too
(http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=602273) so can we tag
it for stable as well?
Thanks,
Ian.
>
> "Jer
On Tue, Nov 02, 2010 at 09:06:56PM +, Haiyang Zhang wrote:
> > From: Brandon Philips [mailto:bran...@ifup.org]
> > Sent: Tuesday, November 02, 2010 1:04 PM
> > > -static int HvQueryHypervisorInfo(void)
> > > +static int hvquery_hypervisor_info(void)
> > > -static u64 HvDoHypercall(u64 control,