Michael S. Tsirkin wrote:
kvm needs data on MSI entries: that's the interface the
current kernel exposes for injecting these interrupts.
I think we also need to support in-kernel devices which
would inject MSI interrupts directly from the kernel.
For these, kvm would need to know when the mask bit changes
and
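For reference, the per-vector state being discussed here is small: an MSI entry boils down to an address/data pair plus a mask bit. A minimal sketch in C, with illustrative names rather than the actual kvm ABI:

    #include <stdint.h>

    /* Per-vector MSI state a kernel-side consumer would need to see;
     * illustrative only, not the real kvm interface. */
    struct msi_entry {
        uint64_t addr;   /* message address: selects the target APIC */
        uint32_t data;   /* message data: vector and delivery mode   */
        int      masked; /* per-vector mask bit from the MSI-X table */
    };

An in-kernel device would have to check the mask bit before firing, which is why kvm needs to hear about mask-bit writes.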
On Thu, May 21, 2009 at 07:45:20PM +0300, Michael S. Tsirkin wrote:
> On Thu, May 21, 2009 at 02:31:26PM +0100, Paul Brook wrote:
> > On Thursday 21 May 2009, Paul Brook wrote:
> > > > > MSI provides multiple edge triggered interrupts, whereas traditional
> > > > > mode provides a single level triggered interrupt. My guess is most
> > > > > devices will want to treat these differently anyway.
On Thu, May 21, 2009 at 03:50:18PM +0100, Paul Brook wrote:
> > >>> kvm has no business messing with the PCI device code.
> > >>
> > >> kvm has a fast path for irq injection. If qemu wants to support it we
> > >> need some abstraction here.
> > >
> > > Fast path from where to where? Having the PCI layer bypass/re-implement
> > > the APIC and inject the interrupt directly
On Thu, May 21, 2009 at 02:31:26PM +0100, Paul Brook wrote:
> On Thursday 21 May 2009, Paul Brook wrote:
> > > > MSI provides multiple edge triggered interrupts, whereas traditional
> > > > mode provides a single level triggered interrupt. My guess is most
> > > > devices will want to treat these differently anyway.
Paul Brook wrote:
The fast path is an eventfd so that we don't have to teach all the
clients about the details of MSI. Userspace programs the MSI details
into kvm and hands the client an eventfd. All the client has to do is
bang on the eventfd for the interrupt to be queued. The eventfd
provides
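A sketch of what "bang on the eventfd" means on the client side, assuming userspace has already bound the eventfd to an MSI vector inside kvm; inject_msi and irqfd are illustrative names:

    #include <stdint.h>
    #include <unistd.h>

    /* The client stays ignorant of MSI details: raising the interrupt
     * is a single counter write, which kvm's poll side turns into an
     * injection. */
    static void inject_msi(int irqfd)
    {
        uint64_t one = 1;
        (void)write(irqfd, &one, sizeof(one));  /* eventfd writes are 8 bytes */
    }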
On Thursday 21 May 2009, Avi Kivity wrote:
> Paul Brook wrote:
> >> kvm implements the APIC in the host kernel (qemu upstream doesn't
> >> support this yet). The fast path is wired to the in-kernel APIC, not
> >> the cpu core directly.
> >>
> >> The idea is to wire it to UIO for device assignment, to a virtio-device
> >> implemented in the kernel, and to qemu.
Paul Brook wrote:
kvm implements the APIC in the host kernel (qemu upstream doesn't
support this yet). The fast path is wired to the in-kernel APIC, not
the cpu core directly.
The idea is to wire it to UIO for device assignment, to a virtio-device
implemented in the kernel, and to qemu.
> >>> kvm has no business messing with the PCI device code.
> >>
> >> kvm has a fast path for irq injection. If qemu wants to support it we
> >> need some abstraction here.
> >
> > Fast path from where to where? Having the PCI layer bypass/re-implement
> > the APIC and inject the interrupt directly
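Mechanically, "wired to the in-kernel APIC" means ending up in an ioctl instead of an emulated memory write. A hedged sketch, assuming the MSI vector has already been mapped to a GSI in kvm's irq routing table; kick_msi is an illustrative name, not qemu code:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* vmfd is the VM file descriptor; gsi is the routing-table entry
     * the vector was bound to. */
    static void kick_msi(int vmfd, uint32_t gsi)
    {
        struct kvm_irq_level trigger = { .irq = gsi, .level = 1 };
        ioctl(vmfd, KVM_IRQ_LINE, &trigger);   /* raise ...           */
        trigger.level = 0;
        ioctl(vmfd, KVM_IRQ_LINE, &trigger);   /* ... and clear: edge */
    }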
On Thu, May 21, 2009 at 02:23:20PM +0100, Paul Brook wrote:
> > > MSI provides multiple edge triggered interrupts, whereas traditional mode
> > > provides a single level triggered interrupt. My guess is most devices
> > > will want to treat these differently anyway.
> >
> > So, is qemu_send_msi better than qemu_set_irq.
Paul Brook wrote:
On Thursday 21 May 2009, Avi Kivity wrote:
Paul Brook wrote:
which is a trivial wrapper around stl_phys.
OK, but I'm adding another level of indirection in the middle,
to allow us to tie in a kvm backend.
kvm has no business messing with the PCI device code.
On Thursday 21 May 2009, Avi Kivity wrote:
> Paul Brook wrote:
> >>> which is a trivial wrapper around stl_phys.
> >>
> >> OK, but I'm adding another level of indirection in the middle,
> >> to allow us to tie in a kvm backend.
> >
> > kvm has no business messing with the PCI device code.
>
> kvm has a fast path for irq injection. If qemu wants to support it we
> need some abstraction here.
On Thu, May 21, 2009 at 02:53:14PM +0100, Paul Brook wrote:
> > > which is a trivial wrapper around stl_phys.
> >
> > OK, but I'm adding another level of indirection in the middle,
> > to allow us to tie in a kvm backend.
>
> kvm has no business messing with the PCI device code.
Yes it has :)
kvm
Paul Brook wrote:
which is a trivial wrapper around stl_phys.
OK, but I'm adding another level of indirection in the middle,
to allow us to tie in a kvm backend.
kvm has no business messing with the PCI device code.
kvm has a fast path for irq injection. If qemu wants to support it we
need some abstraction here.
> > which is a trivial wrapper around stl_phys.
>
> OK, but I'm adding another level of indirection in the middle,
> to allow us to tie in a kvm backend.
kvm has no business messing with the PCI device code.
Paul
On Thursday 21 May 2009, Paul Brook wrote:
> > > MSI provides multiple edge triggered interrupts, whereas traditional
> > > mode provides a single level triggered interrupt. My guess is most
> > > devices will want to treat these differently anyway.
> >
> > So, is qemu_send_msi better than qemu_set_irq.
> > MSI provides multiple edge triggered interrupts, whereas traditional mode
> > provides a single level triggered interrupt. My guess is most devices
> > will want to treat these differently anyway.
>
> So, is qemu_send_msi better than qemu_set_irq.
Neither. pci_send_msi, which is a trivial wrapper around stl_phys.
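The "trivial wrapper" shape, sketched under the assumption that the device model already holds the guest-programmed address/data pair for the vector; the signature is illustrative, not an existing qemu function:

    /* An MSI is just a 32-bit store to a magic physical address; on
     * x86 the APIC decodes it into vector/delivery fields.  stl_phys
     * is qemu's physical-memory store helper. */
    static void pci_send_msi(uint64_t addr, uint32_t data)
    {
        stl_phys(addr, data);
    }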
On Thu, May 21, 2009 at 02:09:32PM +0100, Paul Brook wrote:
> > > A tight coupling between PCI devices and the APIC is just going to cause
> > > us problems later on. I'm going to come back to the fact that these are
> > > memory writes so once we get IOMMU support they will presumably be
> > > subject to remapping by that, just like any other memory access.
On Thu, May 21, 2009 at 03:38:56PM +0300, Avi Kivity wrote:
> Paul Brook wrote:
>>> Instead of writing directly, let's abstract it behind a qemu_set_irq().
>>> This is easier for device authors. The default implementation of the
>>> irq callback could write to apic memory, while for kvm we can directly
>>> trigger the interrupt via the kvm APIs.
On Thu, May 21, 2009 at 01:29:37PM +0100, Paul Brook wrote:
> On Thursday 21 May 2009, Avi Kivity wrote:
> > Paul Brook wrote:
> > In any case we need some internal API for this, and qemu_irq looks
> > like a good choice.
> > >>>
> > >>> What do you expect to be using this API?
> > >>
> >
> > A tight coupling between PCI devices and the APIC is just going to cause
> > us problems later on. I'm going to come back to the fact that these are
> > memory writes so once we get IOMMU support they will presumably be
> > subject to remapping by that, just like any other memory access.
>
> I
Paul Brook wrote:
Instead of writing directly, let's abstract it behind a qemu_set_irq().
This is easier for device authors. The default implementation of the
irq callback could write to apic memory, while for kvm we can directly
trigger the interrupt via the kvm APIs.
I'm still not convinced.
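The indirection Avi describes, sketched as a qemu_irq handler; MSIVector and msi_irq_handler are illustrative names. A kvm build would swap the stl_phys() for a call into kvm's injection path:

    typedef struct MSIVector {
        uint64_t addr;   /* guest-programmed message address */
        uint32_t data;   /* guest-programmed message data    */
    } MSIVector;

    /* Matches qemu's irq handler signature: opaque points at the
     * device's vector table, n selects the vector. */
    static void msi_irq_handler(void *opaque, int n, int level)
    {
        MSIVector *v = (MSIVector *)opaque + n;
        if (!level)
            return;                  /* MSI is edge triggered */
        stl_phys(v->addr, v->data);  /* default: emulated APIC decodes it */
    }

The device then just calls qemu_set_irq() and never sees MSI details.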
On Thursday 21 May 2009, Avi Kivity wrote:
> Paul Brook wrote:
> In any case we need some internal API for this, and qemu_irq looks
> like a good choice.
> >>>
> >>> What do you expect to be using this API?
> >>
> >> virtio, emulated devices capable of supporting MSI (e1000?), device
> >> assignment (not yet in qemu.git).
On Thu, May 21, 2009 at 03:08:18PM +0300, Avi Kivity wrote:
> Paul Brook wrote:
> In any case we need some internal API for this, and qemu_irq looks like
> a good choice.
>
What do you expect to be using this API?
>>> virtio, emulated devices capable of supporting MSI (e1000?), device
>>> assignment (not yet in qemu.git).
Paul Brook wrote:
In any case we need some internal API for this, and qemu_irq looks like
a good choice.
What do you expect to be using this API?
virtio, emulated devices capable of supporting MSI (e1000?), device
assignment (not yet in qemu.git).
It probably makes sense
> >> The PCI bus doesn't need any special support (I think) but something on
> >> the other end needs to interpret those writes.
> >
> > Sure. But there's definitely nothing PCI specific about it. I assumed
> > this would all be contained within the APIC.
>
> MSIs are defined by PCI and their configuration
Paul Brook wrote:
The PCI bus doesn't need any special support (I think) but something on
the other end needs to interpret those writes.
Sure. But there's definitely nothing PCI specific about it. I assumed this
would all be contained within the APIC.
MSIs are defined by PCI and their configuration
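For context on why both statements hold, the x86 encoding: the message format is defined by PCI, but the address falls in a fixed window that the APIC decodes. Constants per the Intel SDM; the helper is a sketch, not qemu code:

    #include <stdint.h>

    #define MSI_ADDR_BASE        0xfee00000u  /* fixed APIC window   */
    #define MSI_ADDR_DEST_SHIFT  12           /* destination APIC ID */
    #define MSI_DATA_VECTOR_MASK 0xffu        /* vector in low byte  */

    static uint64_t msi_address(uint8_t dest_apic_id)
    {
        return MSI_ADDR_BASE | ((uint64_t)dest_apic_id << MSI_ADDR_DEST_SHIFT);
    }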
On Thu, May 21, 2009 at 11:34:11AM +0100, Paul Brook wrote:
> > The PCI bus doesn't need any special support (I think) but something on
> > the other end needs to interpret those writes.
>
> Sure. But there's definitely nothing PCI specific about it. I assumed this
> would all be contained within the APIC.
> The PCI bus doesn't need any special support (I think) but something on
> the other end needs to interpret those writes.
Sure. But there's definitely nothing PCI specific about it. I assumed this
would all be contained within the APIC.
> In any case we need some internal API for this, and qemu_irq looks
> like a good choice.
Paul Brook wrote:
On Wednesday 20 May 2009, Michael S. Tsirkin wrote:
define api for allocating/setting up msi-x irqs, and for updating them
with msi-x vector information, supply implementation in ioapic. Please
comment on this API: I intend to port my msi-x patch to work on top of
it.
On Wednesday 20 May 2009, Michael S. Tsirkin wrote:
> define api for allocating/setting up msi-x irqs, and for updating them
> with msi-x vector information, supply implementation in ioapic. Please
> comment on this API: I intend to port my msi-x patch to work on top of
> it.
I thought the point of
On Wed, May 20, 2009 at 11:44:57PM +0300, Blue Swirl wrote:
> On 5/20/09, Michael S. Tsirkin wrote:
> > On Wed, May 20, 2009 at 11:26:42PM +0300, Blue Swirl wrote:
> > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > On Wed, May 20, 2009 at 11:02:24PM +0300, Michael S. Tsirkin wrote:
> > > > > On Wed, May 20, 2009 at 09:38:58PM +0300, Blue Swirl wrote:
On 5/20/09, Michael S. Tsirkin wrote:
> On Wed, May 20, 2009 at 11:26:42PM +0300, Blue Swirl wrote:
> > On 5/20/09, Michael S. Tsirkin wrote:
> > > On Wed, May 20, 2009 at 11:02:24PM +0300, Michael S. Tsirkin wrote:
> > > > On Wed, May 20, 2009 at 09:38:58PM +0300, Blue Swirl wrote:
> > > > > On 5/20/09, Michael S. Tsirkin wrote:
On Wed, May 20, 2009 at 11:26:42PM +0300, Blue Swirl wrote:
> On 5/20/09, Michael S. Tsirkin wrote:
> > On Wed, May 20, 2009 at 11:02:24PM +0300, Michael S. Tsirkin wrote:
> > > On Wed, May 20, 2009 at 09:38:58PM +0300, Blue Swirl wrote:
> > > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > > On Wed, May 20, 2009 at 08:44:31PM +0300, Blue Swirl wrote:
On Wed, May 20, 2009 at 11:18:56PM +0300, Blue Swirl wrote:
> On 5/20/09, Michael S. Tsirkin wrote:
> > On Wed, May 20, 2009 at 09:38:58PM +0300, Blue Swirl wrote:
> > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > On Wed, May 20, 2009 at 08:44:31PM +0300, Blue Swirl wrote:
> > > > > On 5/20/09, Michael S. Tsirkin wrote:
On 5/20/09, Michael S. Tsirkin wrote:
> On Wed, May 20, 2009 at 11:02:24PM +0300, Michael S. Tsirkin wrote:
> > On Wed, May 20, 2009 at 09:38:58PM +0300, Blue Swirl wrote:
> > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > On Wed, May 20, 2009 at 08:44:31PM +0300, Blue Swirl wrote:
> > > > > On 5/20/09, Michael S. Tsirkin wrote:
On 5/20/09, Michael S. Tsirkin wrote:
> On Wed, May 20, 2009 at 09:38:58PM +0300, Blue Swirl wrote:
> > On 5/20/09, Michael S. Tsirkin wrote:
> > > On Wed, May 20, 2009 at 08:44:31PM +0300, Blue Swirl wrote:
> > > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > > On Wed, May 20, 2009 at 08:21:01PM +0300, Blue Swirl wrote:
On Wed, May 20, 2009 at 11:02:24PM +0300, Michael S. Tsirkin wrote:
> On Wed, May 20, 2009 at 09:38:58PM +0300, Blue Swirl wrote:
> > On 5/20/09, Michael S. Tsirkin wrote:
> > > On Wed, May 20, 2009 at 08:44:31PM +0300, Blue Swirl wrote:
> > > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > > On Wed, May 20, 2009 at 08:21:01PM +0300, Blue Swirl wrote:
On Wed, May 20, 2009 at 09:38:58PM +0300, Blue Swirl wrote:
> On 5/20/09, Michael S. Tsirkin wrote:
> > On Wed, May 20, 2009 at 08:44:31PM +0300, Blue Swirl wrote:
> > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > On Wed, May 20, 2009 at 08:21:01PM +0300, Blue Swirl wrote:
> > > > > On 5/20/09, Michael S. Tsirkin wrote:
On 5/20/09, Michael S. Tsirkin wrote:
> On Wed, May 20, 2009 at 08:44:31PM +0300, Blue Swirl wrote:
> > On 5/20/09, Michael S. Tsirkin wrote:
> > > On Wed, May 20, 2009 at 08:21:01PM +0300, Blue Swirl wrote:
> > > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > > define api for allocating/setting up msi-x irqs, and for updating them
> > > > > with msi-x vector information, supply implementation in ioapic.
On Wed, May 20, 2009 at 08:44:31PM +0300, Blue Swirl wrote:
> On 5/20/09, Michael S. Tsirkin wrote:
> > On Wed, May 20, 2009 at 08:21:01PM +0300, Blue Swirl wrote:
> > > On 5/20/09, Michael S. Tsirkin wrote:
> > > > define api for allocating/setting up msi-x irqs, and for updating them
> > > > with msi-x vector information, supply implementation in ioapic. Please
> > > > comment on this API: I intend to port my msi-x patch to work on top of
> > > > it.
On 5/20/09, Michael S. Tsirkin wrote:
> On Wed, May 20, 2009 at 08:21:01PM +0300, Blue Swirl wrote:
> > On 5/20/09, Michael S. Tsirkin wrote:
> > > define api for allocating/setting up msi-x irqs, and for updating them
> > > with msi-x vector information, supply implementation in ioapic. Please
> > > comment on this API: I intend to port my msi-x patch to work on top of
> > > it.
On 5/20/09, Avi Kivity wrote:
> Blue Swirl wrote:
>
> > Sparc64 also uses packets ("mondos", not implemented yet) for
> > interrupt vector data; there the packet size is 8 * 64 bits. I think
> > we should aim for a more generic API that covers this case also.
> >
>
> Is the packet structure visible to software?
On Wed, May 20, 2009 at 08:21:01PM +0300, Blue Swirl wrote:
> On 5/20/09, Michael S. Tsirkin wrote:
> > define api for allocating/setting up msi-x irqs, and for updating them
> > with msi-x vector information, supply implementation in ioapic. Please
> > comment on this API: I intend to port my msi-x patch to work on top of
> > it.
Blue Swirl wrote:
Sparc64 also uses packets ("mondos", not implemented yet) for
interrupt vector data; there the packet size is 8 * 64 bits. I think
we should aim for a more generic API that covers this case also.
Is the packet structure visible to software?
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
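One hypothetical shape for the "more generic API" Blue Swirl asks for: carry the payload with the interrupt message, sized for the largest known case. IRQMsg is an invented name, not a proposal from the thread:

    #include <stdint.h>

    typedef struct IRQMsg {
        unsigned size;     /* payload bytes actually used         */
        uint64_t data[8];  /* 8 * 64 bits covers a sparc64 mondo; */
                           /* MSI needs only 32 bits of data[0]   */
    } IRQMsg;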
On 5/20/09, Michael S. Tsirkin wrote:
> define api for allocating/setting up msi-x irqs, and for updating them
> with msi-x vector information, supply implementation in ioapic. Please
> comment on this API: I intend to port my msi-x patch to work on top of
> it.
>
> Signed-off-by: Michael S. Tsirkin
define api for allocating/setting up msi-x irqs, and for updating them
with msi-x vector information, supply implementation in ioapic. Please
comment on this API: I intend to port my msi-x patch to work on top of
it.
Signed-off-by: Michael S. Tsirkin
---
 hw/apic.c   |  1 -
 hw/ioapic.c |
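Since the patch body is cut off in this listing, here is a hedged sketch of the API shape the description implies (allocate per-vector irqs, then push guest-programmed vector information at them); every name below is a guess, not the actual patch:

    /* Allocate nentries edge-triggered irqs backing an MSI-X table. */
    qemu_irq *msix_allocate_irqs(int nentries);

    /* Update one vector with its guest-programmed address/data pair. */
    void msix_update_vector(qemu_irq irq, uint64_t addr, uint32_t data);

    void msix_free_irqs(qemu_irq *irqs, int nentries);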