On 10/09/2012 05:16 AM, Rusty Russell wrote:
> Anthony Liguori writes:
>> We'll never remove legacy so we shouldn't plan on it. There are
>> literally hundreds of thousands of VMs out there with the current virtio
>> drivers installed in them. We'll be supporting them for a very, very
>> long ti
On 09/04/2012 09:41 PM, Michael S. Tsirkin wrote:
> On Tue, Sep 04, 2012 at 07:34:19PM +0300, Avi Kivity wrote:
>> On 08/31/2012 12:56 PM, Michael S. Tsirkin wrote:
>> > On Fri, Aug 31, 2012 at 11:36:07AM +0200, Sasha Levin wrote:
>> >> On 08/30/2012 03:3
On 09/04/2012 07:34 PM, Avi Kivity wrote:
> On 08/31/2012 12:56 PM, Michael S. Tsirkin wrote:
>> On Fri, Aug 31, 2012 at 11:36:07AM +0200, Sasha Levin wrote:
>>> On 08/30/2012 03:38 PM, Michael S. Tsirkin wrote:
>>> >> +static unsigned int indirect_alloc_thresh =
On 08/31/2012 12:56 PM, Michael S. Tsirkin wrote:
> On Fri, Aug 31, 2012 at 11:36:07AM +0200, Sasha Levin wrote:
>> On 08/30/2012 03:38 PM, Michael S. Tsirkin wrote:
>> >> +static unsigned int indirect_alloc_thresh = 16;
>> > Why 16? Please make it MAX_SG + 1, this makes some sense.
>>
>> Wouldn't
On 06/26/2012 11:32 PM, Frank Swiderski wrote:
> This implementation of a virtio balloon driver uses the page cache to
> "store" pages that have been released to the host. The communication
> (outside of target counts) is one way--the guest notifies the host when
> it adds a page to the page cache
On 08/09/2012 06:13 PM, Paolo Bonzini wrote:
> Il 05/07/2012 12:29, Jason Wang ha scritto:
> Sometimes, a virtio device needs to configure an irq affinity hint to maximize
> performance. Instead of just exposing the irq of a virtqueue, this patch
> introduces an API to set the affinity for a virtqu
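A rough sketch of how a driver might use such a per-virtqueue affinity setter; the exact signature (a plain CPU number here) and the helper name are assumptions rather than the final API:

#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <linux/cpumask.h>

/* Hypothetical helper: spread the interrupt affinity hints of an array of
 * virtqueues across the online CPUs.  virtqueue_set_affinity() is assumed
 * here to take (vq, cpu). */
static void my_spread_vq_affinity(struct virtqueue **vqs, int nvqs)
{
	int i;

	for (i = 0; i < nvqs; i++)
		virtqueue_set_affinity(vqs[i], i % num_online_cpus());
}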
On 08/09/2012 12:55 PM, Amit Shah wrote:
> On (Thu) 09 Aug 2012 [18:24:58], Masami Hiramatsu wrote:
>> (2012/08/09 18:03), Amit Shah wrote:
>> > On (Tue) 24 Jul 2012 [11:37:18], Yoshihiro YUNOMAE wrote:
>> >> From: Masami Hiramatsu
>> >>
>> >> Add a fallback memcpy path for unstealable pipe buffer
On 07/26/2012 11:15 PM, Nicholas A. Bellinger wrote:
>>
>
> Example..? If there is a better way to handle ioctl compat I'd
> certainly like to hear about it.
>
r = ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_ASSIGN_DEV_IRQ);
if (r == -1)
...
if (r)
// ioctl(fd, KVM_ASSIGN_DEV_IRQ, ...) is
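Expanded into a self-contained user-space sketch of the same capability probe (the capability name is taken from the snippet above; assumes that era's <linux/kvm.h>):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int fd = open("/dev/kvm", O_RDWR);
	if (fd < 0) {
		perror("open /dev/kvm");
		return 1;
	}

	/* KVM_CHECK_EXTENSION returns 0 if the capability is absent,
	 * a positive value if present, and -1 on error. */
	int r = ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_ASSIGN_DEV_IRQ);
	if (r < 0)
		perror("KVM_CHECK_EXTENSION");
	else if (r)
		printf("KVM_ASSIGN_DEV_IRQ is available\n");
	else
		printf("KVM_ASSIGN_DEV_IRQ is not available\n");
	return 0;
}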
On 07/26/2012 05:34 AM, Nicholas A. Bellinger wrote:
>
> In that case, I'm respinning a -v5 for tcm_vhost to start from ABI=0 and
> will post an updated patch shortly.
>
>> The main thing I would like to confirm is whether this only versions the
>> tcm_vhost ioctls. In that case a single version number
On 07/24/2012 11:45 PM, Nicholas A. Bellinger wrote:
>> > diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h
>> > index e942df9..3d5378f 100644
>> > --- a/drivers/vhost/tcm_vhost.h
>> > +++ b/drivers/vhost/tcm_vhost.h
>> > @@ -80,7 +80,17 @@ struct tcm_vhost_tport {
>> >
>> > #i
On 05/08/2012 02:15 AM, Jeremy Fitzhardinge wrote:
> On 05/07/2012 06:49 AM, Avi Kivity wrote:
> > On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
> >> * Raghavendra K T [2012-05-07
> >> 19:08:51]:
> >>
> >>> I'll get hold of a PLE mc and
On 05/07/2012 05:47 PM, Raghavendra K T wrote:
>> Not good. Solving a problem in software that is already solved by
>> hardware? It's okay if there are no costs involved, but here we're
>> introducing a new ABI that we'll have to maintain for a long time.
>>
>
>
> Hmm agree that being a step ahea
On 05/07/2012 05:52 PM, Avi Kivity wrote:
> > Having said that, it is hard for me to resist saying:
> > the bottleneck is somewhere else on PLE machines, and IMHO the answer would be a
> > combination of paravirt-spinlock + pv-flush-tb.
> >
> > But I need to come up with good nu
On 05/07/2012 04:53 PM, Raghavendra K T wrote:
>> Is the improvement so low, because PLE is interfering with the patch, or
>> because PLE already does a good job?
>>
>
>
> It is because PLE already does a good job (of not burning cpu). The
> 1-3% improvement is because the patchset knows at least who i
On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
> * Raghavendra K T [2012-05-07 19:08:51]:
>
> > I'll get hold of a PLE machine and come up with the numbers soon, but I
> > expect the improvement to be around 1-3%, as it was in the last version.
>
> Deferring preemption (when vcpu is holding lock) may giv
On 05/07/2012 04:20 PM, Raghavendra K T wrote:
> On 05/07/2012 05:36 PM, Avi Kivity wrote:
>> On 05/07/2012 01:58 PM, Raghavendra K T wrote:
>>> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>>>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>>>>> This
On 05/07/2012 01:58 PM, Raghavendra K T wrote:
> On 05/07/2012 02:02 PM, Avi Kivity wrote:
>> On 05/07/2012 11:29 AM, Ingo Molnar wrote:
>>> This is looking pretty good and complete now - any objections
>>> from anyone to trying this out in a separate x86 topic tree?
On 05/07/2012 11:29 AM, Ingo Molnar wrote:
> This is looking pretty good and complete now - any objections
> from anyone to trying this out in a separate x86 topic tree?
No objections, instead an
Acked-by: Avi Kivity
--
error compiling committee.c: too many arguments to fu
On 04/29/2012 04:52 PM, Gleb Natapov wrote:
> On Sun, Apr 29, 2012 at 04:26:21PM +0300, Avi Kivity wrote:
> > On 04/29/2012 04:20 PM, Gleb Natapov wrote:
> > > > > This is too similar to kvm_irq_delivery_to_apic(). Why not reuse it.
> > > > > We
> >
On 04/30/2012 10:44 AM, Raghavendra K T wrote:
>> Hm, what about reusing KVM_REQ_UNHALT?
>>
>
>
> Yes, I had experimented with this for some time without success.
> For example, having
> make_request(KVM_REQ_UNHALT, vcpu) directly from the kick hypercall.
>
> It would still need a flag. (did not get any alterna
On 04/23/2012 01:00 PM, Raghavendra K T wrote:
> From: Raghavendra K T
>
> Signed-off-by: Raghavendra K T
> ---
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 6faa550..7354c1b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5691,7 +5691,9 @@ int kvm_arch_vcpu_io
On 04/29/2012 04:20 PM, Gleb Natapov wrote:
> > > This is too similar to kvm_irq_delivery_to_apic(). Why not reuse it. We
> > > can use one of reserved delivery modes as PV delivery mode. We will
> > > disallow guest to trigger it through apic interface, so this will not be
> > > part of ABI and ca
On 04/23/2012 12:59 PM, Raghavendra K T wrote:
> From: Srivatsa Vaddagiri
>
> KVM_HC_KICK_CPU allows the calling vcpu to kick another vcpu out of halt
> state.
>
> The presence of these hypercalls is indicated to guest via
> KVM_FEATURE_PV_UNHALT/KVM_CAP_PV_UNHALT.
>
> #endif
> diff --git a
On 04/24/2012 12:59 PM, Gleb Natapov wrote:
> >
> > +/*
> > + * kvm_pv_kick_cpu_op: Kick a vcpu.
> > + *
> > + * @apicid - apicid of vcpu to be kicked.
> > + */
> > +static void kvm_pv_kick_cpu_op(struct kvm *kvm, int apicid)
> > +{
> > + struct kvm_vcpu *vcpu = NULL;
> > + int i;
> > +
> >
On 04/10/2012 10:28 AM, Ren Mingxin wrote:
> The current virtio block naming algorithm only supports 18278
> (26^3 + 26^2 + 26) disks. If there are more virtio block devices than that,
> there will be disks with the same name.
>
> Based on commit 3e1a7ff8a0a7b948f2684930166954f9e8e776fe, I add
> function "virt
On 04/02/2012 12:26 PM, Thomas Gleixner wrote:
> > One thing about it is that it can give many false positives. Consider a
> > fine-grained spinlock that is being accessed by many threads. That is,
> > the lock is taken and released with high frequency, but there is no
> > contention, because eac
On 04/02/2012 12:51 PM, Raghavendra K T wrote:
> On 04/01/2012 07:23 PM, Avi Kivity wrote:
> > On 04/01/2012 04:48 PM, Raghavendra K T wrote:
> >>>> I have a patch something like the below in mind to try:
> >>>>
> >>>> diff --git a/virt/kvm/kvm_main
On 04/01/2012 04:48 PM, Raghavendra K T wrote:
>>> I have a patch something like the below in mind to try:
>>>
>>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>>> index d3b98b1..5127668 100644
>>> --- a/virt/kvm/kvm_main.c
>>> +++ b/virt/kvm/kvm_main.c
>>> @@ -1608,15 +1608,18 @@ void kvm_vcpu
On 03/31/2012 01:07 AM, Thomas Gleixner wrote:
> On Fri, 30 Mar 2012, H. Peter Anvin wrote:
>
> > What is the current status of this patchset? I haven't looked at it too
> > closely because I have been focused on 3.4 up until now...
>
> The real question is whether these heuristics are the correct
On 03/30/2012 01:07 PM, Raghavendra K T wrote:
> On 03/29/2012 11:33 PM, Raghavendra K T wrote:
>> On 03/29/2012 03:28 PM, Avi Kivity wrote:
>>> On 03/28/2012 08:21 PM, Raghavendra K T wrote:
>
>> I really like the ideas below. Thanks for that!
>>
>>> - from
On 03/28/2012 08:21 PM, Raghavendra K T wrote:
>
>>
>>
>> Looks like a good baseline on which to build the KVM implementation. We
>> might need some handshake to prevent interference on the host side with
>> the PLE code.
>>
>
> I think I still missed
On 03/21/2012 12:20 PM, Raghavendra K T wrote:
> From: Jeremy Fitzhardinge
>
> Changes since last posting: (Raghavendra K T)
> [
> - Rebased to linux-3.3-rc6.
> - used function+enum in place of macro (better type checking)
> - use cmpxchg while resetting zero status for possible race
> [
On 03/19/2012 08:57 PM, Michael S. Tsirkin wrote:
> >
> > Should be done via an extra BAR (with the same layout, perhaps extended)
> > so compatibility is preserved.
>
> No, that would need guest changes to be of use. The point of this hack
> is to make things work for Linux guests where PIO does
On 03/19/2012 05:56 PM, Michael S. Tsirkin wrote:
> Currently virtio-pci is specified so that configuration of the device is
> done through a PCI IO space (via BAR 0 of the virtual PCI device).
> However, Linux guests happen to use ioread/iowrite/iomap primitives
> for access, and these work unifor
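The "work uniformly" point can be illustrated with a small sketch using the legacy register offsets from the virtio_pci header (error handling trimmed; this is a sketch, not the proposed spec change itself):

#include <linux/pci.h>
#include <linux/io.h>
#include <linux/virtio_pci.h>

/* The driver maps BAR 0 with pci_iomap() and then does not care whether the
 * BAR is an I/O port or memory: ioread32()/iowrite32() dispatch on the
 * cookie at run time. */
static void __iomem *my_virtio_pci_map(struct pci_dev *pci_dev, u32 guest_features)
{
	void __iomem *base = pci_iomap(pci_dev, 0, 0);

	if (!base)
		return NULL;

	(void)ioread32(base + VIRTIO_PCI_HOST_FEATURES);
	iowrite32(guest_features, base + VIRTIO_PCI_GUEST_FEATURES);
	return base;
}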
On 01/18/2012 11:52 PM, Jeremy Fitzhardinge wrote:
> On 01/19/2012 12:54 AM, Srivatsa Vaddagiri wrote:
> >
> >> That logic relies on the "kick" being level triggered, so that "kick"
> >> before "block" will cause the block to fall out immediately. If you're
> >> using "hlt" as the block and it has
On 01/16/2012 04:13 PM, Raghavendra K T wrote:
>> Please drop all of these and replace them with tracepoints in the appropriate
>> spots. Everything else (including the histogram) can be reconstructed from
>> the tracepoints in userspace.
>>
>
>
> I think Jeremy pointed out that tracepoints use spinlocks and h
On 01/16/2012 03:43 PM, Raghavendra K T wrote:
>>> Dbench:
>>> Throughput is in MB/sec
>>> NRCLIENTS   BASE                  BASE+patch            %improvement
>>>             mean (sd)             mean (sd)
>>> 8           1.101190 (0.875082)   1.700395 (0.846809)   54.4143
On 01/11/2012 06:54 AM, Stephen Hemminger wrote:
> By adding a module alias, programs (or users) won't have to explicitly
> call modprobe. Vhost-net will always be available if built into the kernel.
> It does require assigning a permanent minor number for depmod to work.
> Choose one next to T
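Roughly the shape of the change being described: a fixed misc minor plus module aliases so modprobe/udev can resolve the node automatically (an abbreviated sketch from memory, not the exact patch):

#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/fs.h>

static const struct file_operations vhost_net_fops = {
	.owner = THIS_MODULE,
	/* real open/release/ioctl handlers omitted in this sketch */
};

static struct miscdevice vhost_net_misc = {
	.minor = VHOST_NET_MINOR,	/* fixed minor reserved in miscdevice.h */
	.name  = "vhost-net",
	.fops  = &vhost_net_fops,
};

MODULE_ALIAS_MISCDEV(VHOST_NET_MINOR);
MODULE_ALIAS("devname:vhost-net");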
On 01/16/2012 11:40 AM, Srivatsa Vaddagiri wrote:
> * Avi Kivity [2012-01-16 11:00:41]:
>
> > Wait, what happens with yield_on_hlt=0? Will the hypercall work as
> > advertised?
>
> Hmm ..I don't think it will work when yield_on_hlt=0.
>
> One option is to mak
On 01/14/2012 08:26 PM, Raghavendra K T wrote:
> Extends a Linux guest running on the KVM hypervisor to support pv-ticketlocks.
>
> During smp_boot_cpus, a paravirtualized KVM guest detects whether the hypervisor has the
> required feature (KVM_FEATURE_PVLOCK_KICK) to support pv-ticketlocks. If so,
> support for p
On 01/14/2012 08:25 PM, Raghavendra K T wrote:
> Add a hypercall to the KVM hypervisor to support pv-ticketlocks
>
> KVM_HC_KICK_CPU allows the calling vcpu to kick another vcpu out of halt
> state.
>
> The presence of these hypercalls is indicated to guest via
> KVM_FEATURE_PVLOCK_KICK/KVM_CAP_
On 01/14/2012 08:27 PM, Raghavendra K T wrote:
> +
> +5. KVM_HC_KICK_CPU
> +
> +value: 5
> +Architecture: x86
> +Purpose: Hypercall used to wakeup a vcpu from HLT state
> +
> +Usage example : A vcpu of a paravirtualized guest that is busywaiting in
> guest
> +kernel mode fo
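A guest-side sketch of issuing this hypercall; the argument list changed across revisions of the series, so the two-argument form with a zero flags word is only an assumption:

#include <linux/kvm_para.h>	/* kvm_hypercall2(), KVM_HC_KICK_CPU */

/* Hypothetical wrapper: wake the halted vcpu identified by apicid. */
static void my_kvm_kick_cpu(int apicid)
{
	kvm_hypercall2(KVM_HC_KICK_CPU, 0, apicid);
}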
On 01/16/2012 08:40 AM, Jeremy Fitzhardinge wrote:
> >
> > That means we're spinning for n cycles, then notifying the spinlock holder
> > that we'd like to get kicked, and going to sleep. While I'm pretty sure that it
> > improves the situation, it doesn't solve all of the issues we have.
> >
> > Imag
On 01/16/2012 06:00 AM, Alexander Graf wrote:
> On 16.01.2012, at 04:51, Srivatsa Vaddagiri wrote:
>
> > * Alexander Graf [2012-01-16 04:23:24]:
> >
> >>> +5. KVM_HC_KICK_CPU
> >>> +
> >>> +value: 5
> >>> +Architecture: x86
> >>> +Purpose: Hypercall used to wakeup a vcpu f
On 12/08/2011 05:37 PM, Sasha Levin wrote:
> On Thu, 2011-12-08 at 20:52 +1030, Rusty Russell wrote:
> > Here's the patch series I ended up with. I haven't coded up the QEMU
> > side yet, so no idea if the new driver works.
> >
> > Questions:
> > (1) Do we win from separating ISR, NOTIFY and COMM
On 12/07/2011 06:46 PM, Raghavendra K T wrote:
> On 12/07/2011 08:22 PM, Avi Kivity wrote:
>> On 12/07/2011 03:39 PM, Marcelo Tosatti wrote:
>>>> Also I think we can keep the kicked flag in vcpu->requests, no need for
>>>> new storage.
>
On 12/07/2011 03:39 PM, Marcelo Tosatti wrote:
> > Also I think we can keep the kicked flag in vcpu->requests, no need for
> > new storage.
>
> Was going to suggest it but it violates the currently organized
> processing of entries at the beginning of vcpu_enter_guest.
>
> That is, this "kicked" fl
On 12/06/2011 02:03 PM, Rusty Russell wrote:
> On Tue, 06 Dec 2011 11:58:21 +0200, Avi Kivity wrote:
> > On 12/06/2011 07:07 AM, Rusty Russell wrote:
> > > Yes, but the hypervisor/trusted party would simply have to do the copy;
> > > the rings themselves would be share
On 12/07/2011 02:33 PM, Marcelo Tosatti wrote:
> >
> > Also Avi pointed out that, logically, kvm_arch_vcpu_ioctl_set_mpstate should
> > be called only in the vcpu thread, so after further debugging, I noticed
> > that, setting vcpuN->mp_state = KVM_MP_STATE_RUNNABLE; is not
> > necessary.
> > I'll remove
On 12/06/2011 06:49 PM, Konrad Rzeszutek Wilk wrote:
> On Sun, Dec 04, 2011 at 11:36:58PM +0530, Raghavendra K T wrote:
> > On 12/02/2011 01:20 AM, Raghavendra K T wrote:
> > >>Have you tested it on AMD machines? There are some differences in the
> > >>hypercall infrastructure there.
> > >
> > >Yes
On 12/06/2011 07:07 AM, Rusty Russell wrote:
> On Mon, 05 Dec 2011 11:52:54 +0200, Avi Kivity wrote:
> > On 12/05/2011 02:10 AM, Rusty Russell wrote:
> > > On Sun, 04 Dec 2011 17:16:59 +0200, Avi Kivity wrote:
> > > > On 12/04/2011 05:11 PM, Michael S. Tsirkin wrote
On 12/05/2011 02:10 AM, Rusty Russell wrote:
> On Sun, 04 Dec 2011 17:16:59 +0200, Avi Kivity wrote:
> > On 12/04/2011 05:11 PM, Michael S. Tsirkin wrote:
> > > > There's also the used ring, but that's a
> > > > mistake if you have out of order comple
On 12/04/2011 07:34 PM, Sasha Levin wrote:
> >
> > I'm confused. didn't you see a bigger benefit for guest->host by
> > switching indirect off?
>
> The 5% improvement is over the 'regular' indirect on, not over indirect
> off. Sorry for the confusion there.
>
> I suggested this change regardless o
On 12/04/2011 06:00 PM, Michael S. Tsirkin wrote:
> > If you
> > copy descriptors, then it goes away.
>
> The avail ring could go away. The used ring could too, if we make descriptors
> writeable. IIUC it was made RO in the hope that it would make it
> easier for xen to adopt. Still relevant?
You mean RO from the
On 12/04/2011 05:11 PM, Michael S. Tsirkin wrote:
> > There's also the used ring, but that's a
> > mistake if you have out of order completion. We should have used copying.
>
> Seems unrelated... unless you want used to be written into
> descriptor ring itself?
The avail/used rings are in additio
On 12/04/2011 02:01 PM, Michael S. Tsirkin wrote:
> >
> > How much better?
> >
> > I think that if indirects benefit networking, then we're doing something
> > wrong. What's going on? Does the ring get filled too early? If so we
> > should expand it.
>
> The ring is physically contiguous.
> Wi
On 12/03/2011 01:50 PM, Sasha Levin wrote:
> On Fri, 2011-12-02 at 11:16 +1030, Rusty Russell wrote:
> > On Thu, 1 Dec 2011 12:26:42 +0200, "Michael S. Tsirkin"
> > wrote:
> > > On Thu, Dec 01, 2011 at 10:09:37AM +0200, Sasha Levin wrote:
> > > > On Thu, 2011-12-01 at 09:58 +0200, Michael S. Tsir
On 11/30/2011 10:59 AM, Raghavendra K T wrote:
> Add a hypercall to the KVM hypervisor to support pv-ticketlocks
>
> KVM_HC_KICK_CPU allows the calling vcpu to kick another vcpu out of halt
> state.
>
> The presence of these hypercalls is indicated to guest via
> KVM_FEATURE_KICK_VCPU/KVM_CAP_KI
On 11/29/2011 04:54 PM, Michael S. Tsirkin wrote:
> >
> > Which is actually strange, weren't indirect buffers introduced to make
> > the performance *better*? From what I see it's pretty much the
> > same/worse for virtio-blk.
>
> I know they were introduced to allow adding very large bufs.
> See
On 11/15/2011 08:13 PM, Sasha Levin wrote:
> > >
> > > Hmm... If thats the plan, it should probably be a virtio thing (not
> > > virtio-mmio specific).
> > >
> > > Either way, it could also use some clarification in the spec.
> >
> > The spec only covers virtio-pci; this virtio-mmio is completel
On 11/15/2011 07:56 PM, Sasha Levin wrote:
> >
> > This isn't a PCI device, so does it make sense to use a PCI vendor
> > ID here? The kernel doesn't check the vendor ID at the moment,
> > but presumably the idea of the field is to allow the kernel to
> > work around implementation bugs/blacklist/
On 11/09/2011 10:46 AM, Sasha Levin wrote:
> > Alternatively we can add new structures with new
> > structure IDs, pointed to from PCI configuration space.
> >
> > As we don't yet have devices or drivers with 64 bit features,
> > I decided we don't need high feature bits in legacy space.
> > This
On 11/08/2011 11:41 PM, Michael S. Tsirkin wrote:
> > PDF will follow.
>
> Attached for the lyx challenged :)
>
>
The diagrams are truncated.
Otherwise looks reasonable.
--
error compiling committee.c: too many arguments to function
On 11/03/2011 04:37 PM, Anthony Liguori wrote:
>
>> 2. Proposed spec patch, kernel change, qemu change
>> 3. Buy-ins from spec maintainer, kernel driver maintainer, qemu device
>> maintainer (only regarding the ABI, not the code)
>
> I don't think this is how it's working
On 11/03/2011 02:11 PM, Michael S. Tsirkin wrote:
> On Thu, Nov 03, 2011 at 12:37:04PM +0200, Avi Kivity wrote:
> > On 11/03/2011 01:31 AM, Michael S. Tsirkin wrote:
> > > Add a flexible mechanism to specify virtio configuration layout, using
> > > pci vendor-speci
On 11/03/2011 01:31 AM, Michael S. Tsirkin wrote:
> Add a flexible mechanism to specify virtio configuration layout, using
> pci vendor-specific capability. A separate capability is used for each
> of common, device specific and data-path accesses.
>
>
How about posting the spec change instead of
On 10/26/2011 09:08 PM, Raghavendra K T wrote:
> On 10/26/2011 04:04 PM, Avi Kivity wrote:
>> On 10/25/2011 08:24 PM, Raghavendra K T wrote:
> CCing Ryan also
>>>
>>> So then do you also foresee the need for directed yield at some point,
>>> to address LHP
On 08/18/2011 10:59 AM, Sasha Levin wrote:
> >
> > This is something that can be used by very few people, but everyone has
> > to carry it. It's more efficient to add statistics support to your
> > automation framework (involving the guest).
> >
>
> That was just one example of many possibiliti
On 08/18/2011 09:29 AM, Sasha Levin wrote:
> >
> > What can you do with it?
> >
>
> I was actually planning on submitting another patch that would add
> something similar into virtio-net. My plan was to enable collecting
> statistics regarding memory, network and disk usage in a simple manner
> wi
On 08/17/2011 09:38 PM, Sasha Levin wrote:
> On Wed, 2011-08-17 at 16:00 -0700, Avi Kivity wrote:
> > On 08/16/2011 12:47 PM, Sasha Levin wrote:
> > > This patch adds support for an optional stats vq that works similarly to the
> > > stats vq provided by v
On 08/16/2011 12:47 PM, Sasha Levin wrote:
> This patch adds support for an optional stats vq that works similarly to the
> stats vq provided by virtio-balloon.
>
> The purpose of this change is to allow collection of statistics about working
> virtio-blk devices to easily analyze performance withou
On 02/07/2011 12:28 PM, Ravi Kumar Kulkarni wrote:
> On Mon, Feb 7, 2011 at 3:24 PM, Avi Kivity wrote:
> > On 02/07/2011 11:47 AM, Ravi Kumar Kulkarni wrote:
> >>
> >> >
> >> >That is not the same address. And the code you post
On 02/07/2011 11:47 AM, Ravi Kumar Kulkarni wrote:
> >
> > That is not the same address. And the code you posted doesn't make any
> > sense.
> >
> sorry for the mistake. here's the correct one
>
>
> (qemu) xp /20iw 0x1e2f3f7b
>0x1e2f3f7b: (bad)
>
On 02/07/2011 11:24 AM, Ravi Kumar Kulkarni wrote:
> On Mon, Feb 7, 2011 at 2:19 PM, Avi Kivity wrote:
> > On 02/07/2011 10:33 AM, Ravi Kumar Kulkarni wrote:
> >>
>> >> On Sun, Feb 6, 2011 at 10:50 PM, Avi Kivity wrote:
> >>>
> >>
On 02/07/2011 10:33 AM, Ravi Kumar Kulkarni wrote:
> On Sun, Feb 6, 2011 at 10:50 PM, Avi Kivity wrote:
>> > On 02/04/2011 03:58 PM, Jan Kiszka wrote:
>>> >>
>>>> >> > when i run my kernel image with qemu-kvm it gives emulation error
>>
On 02/04/2011 03:58 PM, Jan Kiszka wrote:
> > when I run my kernel image with qemu-kvm it gives an emulation failure error,
> > trying to execute code outside ROM or RAM at fec0 (IO APIC base
> > address),
> > but the same code runs fine with qemu. Can anyone please point me
> > where might
On 11/17/2010 11:05 AM, Jeremy Fitzhardinge wrote:
> On 11/17/2010 12:58 AM, Avi Kivity wrote:
> >> Actually in this case I'm pretty sure there's already a "set bit"
> >> function which will do the job. set_bit(), I guess, though it ta
On 11/17/2010 10:52 AM, Jeremy Fitzhardinge wrote:
> On 11/17/2010 12:31 AM, Jan Beulich wrote:
> On 16.11.10 at 22:08, Jeremy Fitzhardinge wrote:
> >> +static inline void __ticket_enter_slowpath(struct arch_spinlock *lock)
> >> +{
> >> + if (sizeof(lock->tickets.tail) == sizeof(u8))
> >>
On 11/16/2010 11:08 PM, Jeremy Fitzhardinge wrote:
> From: Jeremy Fitzhardinge
>
> Hi all,
>
> This is a revised version of the pvticket lock series.
>
> The early part of the series is mostly unchanged: it converts the bulk
> of the ticket lock code into C and makes the "small" and "large"
> ticke
On 10/28/2010 03:52 PM, Ian Molton wrote:
> On 28/10/10 15:24, Avi Kivity wrote:
>>> The caller is intended to block as the host must perform GL rendering
>>> before allowing the guest's process to continue.
>>
>> Why is that? Can't we pipeline the process?
On 10/28/2010 01:54 PM, Ian Molton wrote:
>> Well, I like to review an implementation against a spec.
>
>
> True, but then all that would prove is that I can write a spec to
> match the code.
It would also allow us to check that the spec matches the requirements.
Those two steps are easier th
On 10/27/2010 03:00 PM, Ian Molton wrote:
> On 19/10/10 11:39, Avi Kivity wrote:
>> On 10/19/2010 12:31 PM, Ian Molton wrote:
>
>>>> 2. should start with a patch to the virtio-pci spec to document what
>>>> you're doing
>>>
>>> Where
On 09/13/2010 09:41 AM, Martin Schwidefsky wrote:
>>> Actually Marcelo applied it. But the natural place for it is Rusty's
>>> virtio tree. Rusty, if you want to take it, let me know and I'll drop
>>> it from kvm.git.
>> I thought it would be in the s390 tree, which is why I didn't take it...
>
On 09/12/2010 02:42 AM, Alexander Graf wrote:
> On 24.08.2010, at 15:48, Alexander Graf wrote:
>
>> The one big missing feature in s390-virtio was hotplugging. This is no more.
>> This patch implements hotplug add support, so you can on the fly add new devices
>> in the guest.
>>
>> Keep in m
On 08/24/2010 03:32 PM, Alexander Graf wrote:
>
>> Perhaps we should freeze virtio/s390 development until someone feels
>> sufficiently motivated.
> Sure, go ahead. I don't think that'll help anyone but if it makes you
> feel good...
I don't maintain virtio or the virtio-s390 interface, so I can
On 08/24/2010 03:25 PM, Alexander Graf wrote:
> Avi Kivity wrote:
>> On 08/24/2010 03:14 PM, Christian Borntraeger wrote:
>>> I have no strong opinion on that, but I think it's more a matter of where
>>> to put an interface description. A header file seems just the
On 08/24/2010 03:14 PM, Christian Borntraeger wrote:
>
> I have no strong opinion on that, but I think it's more a matter of where
> to put an interface description. A header file seems just the right place.
> I will let you (or Rusty) decide.
First of all we need a virtio/s390 specification, lik
On 06/29/2010 10:08 AM, Stefan Hajnoczi wrote:
>
> Is it incorrect to have the following pattern?
> spin_lock_irqsave(q->queue_lock, flags);
> spin_unlock(q->queue_lock);
> spin_lock(q->queue_lock);
> spin_unlock_irqrestore(q->queue_lock, flags);
>
Perfectly legitimate. spin_lock_irqsave() is equivalent to
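Spelled out conceptually (ignoring preempt-count bookkeeping), the sequence above unfolds roughly to the following, which shows why it is safe: interrupts are disabled by the first call and stay disabled across the inner unlock/lock pair.

unsigned long flags;

local_irq_save(flags);		/* from spin_lock_irqsave()        */
spin_lock(q->queue_lock);
spin_unlock(q->queue_lock);	/* irqs remain disabled here       */
spin_lock(q->queue_lock);
spin_unlock(q->queue_lock);
local_irq_restore(flags);	/* from spin_unlock_irqrestore()   */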
On 06/23/2010 06:43 PM, Michael S. Tsirkin wrote:
>
>
>>> If/when we use more registers, we can update the driver to clear them on start.
>>>
>>>
>> The kdump kernel may not load drivers for those extra devices.
>>
> Then we don't care about clearing them?
>
We do, if the device
On 06/23/2010 06:26 PM, Michael S. Tsirkin wrote:
>
>
>>>
>>>
>> Shouldn't a reset be equivalent to power cycling?
>>
>>
>>
> If we did this, the driver would need to restore registers
> such as BAR etc.
>
>
>
We could save/rest
On 06/23/2010 05:43 PM, Michael S. Tsirkin wrote:
>
>
>> If we don't already do so, we
>> should probably FLR anything that moves when a kexec kernel starts.
>>
> Probably only whatever we want to use. But whether this will make it
> more, or less robust, is an open question.
>
I'm thi
On 06/23/2010 04:59 PM, Michael S. Tsirkin wrote:
>
>> Why doesn't a device reset result in msi being cleared?
>>
> This is not a standard function reset. This is a virtio-specific
> command. So it only clears virtio registers.
>
I see. We should implement FLR in qemu. If we don't alread
On 06/10/2010 06:22 PM, Michael S. Tsirkin wrote:
> virtio-pci resets the device at startup by writing to the status
> register, but this does not clear the pci config space,
> specifically the msi enable status, which affects register
> layout.
>
> This breaks things like kdump when they try to use e.g
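For the "affects register layout" part: in legacy virtio-pci the device-specific config inside BAR 0 starts after the MSI-X fields only when MSI-X is enabled, along the lines of the sketch below (the macro name is made up; the offsets mirror the legacy header):

/* Stale MSI-X enable state left over after an incomplete "reset" shifts this
 * offset, which is what confuses a freshly booted kdump kernel. */
#define MY_VIRTIO_PCI_CONFIG_OFF(msix_enabled)	((msix_enabled) ? 24 : 20)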
On 05/26/2010 10:50 PM, Michael S. Tsirkin wrote:
> Here's a rewrite of the original patch with a new layout.
> I haven't tested it yet so no idea how this performs, but
> I think this addresses the cache bounce issue raised by Avi.
> Posting for early flames/comments.
>
> Generally, the Host end o
On 05/24/2010 11:05 AM, Michael S. Tsirkin wrote:
>
> Okay, but why is lockunshare faster than unshare?
>
>
No idea.
--
Do not meddle in the internals of kernels, for they are subtle and quick to
panic.
On 05/23/2010 07:30 PM, Michael S. Tsirkin wrote:
>
>
>>> Maybe we should use atomics on index then?
>>>
>>>
>> This should only be helpful if you access the cacheline several times in
>> a row. That's not the case in virtio (or here).
>>
> So why does it help?
>
We actually
On 05/23/2010 06:51 PM, Michael S. Tsirkin wrote:
>>
>>> So locked version seems to be faster than unlocked,
>>> and share/unshare not to matter?
>>>
>>>
>> May be due to the processor using the LOCK operation as a hint to
>> reserve the cacheline for a bit.
>>
> Maybe we should use a
On 05/23/2010 06:31 PM, Michael S. Tsirkin wrote:
> On Thu, May 20, 2010 at 02:38:16PM +0930, Rusty Russell wrote:
>
>> On Thu, 20 May 2010 02:31:50 pm Rusty Russell wrote:
>>
>>> On Wed, 19 May 2010 05:36:42 pm Avi Kivity wrote:
>>>
>>
On 05/20/2010 05:34 PM, Rusty Russell wrote:
>
>> Have just one ring, no indexes. The producer places descriptors into
>> the ring and updates the head. The consumer copies out descriptors to
>> be processed and copies back in completed descriptors. Chaining is
>> always linear. The descriptors
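A rough sketch of the single-ring layout being described (names and field sizes are illustrative only, not a proposal):

#include <linux/types.h>

/* Descriptors live in the ring itself: the producer writes them and advances
 * its head; the consumer copies them out, and copies completed descriptors
 * back in.  Chaining is linear (a chain continues in the next slot). */
struct inline_desc {
	__le64 addr;
	__le32 len;
	__le16 flags;			/* e.g. NEXT / WRITE / DONE bits */
	__le16 pad;
};

struct inline_ring {
	__le16 producer_head;		/* advanced by the producer */
	__le16 consumer_head;		/* advanced by the consumer */
	struct inline_desc desc[];	/* power-of-two number of slots */
};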
On 05/20/2010 08:01 AM, Rusty Russell wrote:
>
>> A device with out of order
>> completion (like virtio-blk) will quickly randomize the unused
>> descriptor indexes, so every descriptor fetch will require a bounce.
>>
>> In contrast, if the rings hold the descriptors themselves instead of
>> pointe