On Mon, Nov 16 2020 at 23:51, Kevin Tian wrote:
>> From: Jason Gunthorpe
> btw Jason/Thomas, what do you think about the proposal down in this
> thread (ims=[auto|on|off])? Does it sound like a good tradeoff to move forward?
What does it solve? It defaults to auto and then you still need to solve
the p
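For reference, a minimal sketch of how such an ims=[auto|on|off] switch
could be wired up as a kernel command-line parameter. The parameter name
comes from the thread; the enum and handler below are illustrative
assumptions, not code from the series:

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/string.h>

enum ims_policy { IMS_AUTO, IMS_ON, IMS_OFF };

/* Default to auto: use IMS only when the platform looks safe for it */
static enum ims_policy ims_policy = IMS_AUTO;

static int __init ims_setup(char *str)
{
        if (!str)
                return -EINVAL;
        if (!strcmp(str, "auto"))
                ims_policy = IMS_AUTO;
        else if (!strcmp(str, "on"))
                ims_policy = IMS_ON;
        else if (!strcmp(str, "off"))
                ims_policy = IMS_OFF;
        else
                return -EINVAL;
        return 0;
}
early_param("ims", ims_setup);

As Thomas's reply points out, the switch alone does not settle what
"auto" should do on a platform that cannot prove it is bare metal.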
> From: Jason Gunthorpe
> Sent: Tuesday, November 17, 2020 2:03 AM
>
> On Mon, Nov 16, 2020 at 06:56:33PM +0100, Thomas Gleixner wrote:
> > On Mon, Nov 16 2020 at 11:46, Jason Gunthorpe wrote:
> >
> > > On Mon, Nov 16, 2020 at 07:31:49AM +, Tian, Kevin wrote:
> > >
> > >> > The subdevices req
On Mon, Nov 16 2020 at 14:02, Jason Gunthorpe wrote:
> On Mon, Nov 16, 2020 at 06:56:33PM +0100, Thomas Gleixner wrote:
>> On Mon, Nov 16 2020 at 11:46, Jason Gunthorpe wrote:
>>
>> > On Mon, Nov 16, 2020 at 07:31:49AM +, Tian, Kevin wrote:
>> >
>> >> > The subdevices require PASID & IOMMU in
On Mon, Nov 16, 2020 at 06:56:33PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 16 2020 at 11:46, Jason Gunthorpe wrote:
>
> > On Mon, Nov 16, 2020 at 07:31:49AM +, Tian, Kevin wrote:
> >
> >> > The subdevices require PASID & IOMMU in native, but inside the guest
> >> > there
> >> > is no
> >>
On Mon, Nov 16 2020 at 11:46, Jason Gunthorpe wrote:
> On Mon, Nov 16, 2020 at 07:31:49AM +, Tian, Kevin wrote:
>
>> > The subdevices require PASID & IOMMU in native, but inside the guest there
>> > is no
>> > need for IOMMU unless you want to build SVM on top. subdevices work
>> > without
>>
On Mon, Nov 16, 2020 at 07:31:49AM +, Tian, Kevin wrote:
> > The subdevices require PASID & IOMMU in native, but inside the guest there
> > is no
> > need for IOMMU unless you want to build SVM on top. subdevices work
> > without
> > any vIOMMU or hypercall in the guest. Only because they look
On Sat, Nov 14, 2020 at 01:18:37PM -0800, Raj, Ashok wrote:
> On Sat, Nov 14, 2020 at 10:34:30AM +, Christoph Hellwig wrote:
> > On Thu, Nov 12, 2020 at 11:42:46PM +0100, Thomas Gleixner wrote:
> > > DMI vendor name is a pretty good final check when the bit is 0. The
> > > strings I'm aware of ar
> From: Raj, Ashok
> Sent: Monday, November 16, 2020 8:23 AM
>
> On Sun, Nov 15, 2020 at 11:11:27PM +0100, Thomas Gleixner wrote:
> > On Sun, Nov 15 2020 at 11:31, Ashok Raj wrote:
> > > On Sun, Nov 15, 2020 at 12:26:22PM +0100, Thomas Gleixner wrote:
> > >> > opt-in by device or kernel? The way
On Sun, Nov 15, 2020 at 11:11:27PM +0100, Thomas Gleixner wrote:
> On Sun, Nov 15 2020 at 11:31, Ashok Raj wrote:
> > On Sun, Nov 15, 2020 at 12:26:22PM +0100, Thomas Gleixner wrote:
> >> > opt-in by device or kernel? The way we are planning to support this is:
> >> >
> >> > Device support for IMS
On Sun, Nov 15 2020 at 11:31, Ashok Raj wrote:
> On Sun, Nov 15, 2020 at 12:26:22PM +0100, Thomas Gleixner wrote:
>> > opt-in by device or kernel? The way we are planning to support this is:
>> >
>> > Device support for IMS - Can discover in device specific means
>> > Kernel support for IMS. - Supp
On Sun, Nov 15, 2020 at 12:26:22PM +0100, Thomas Gleixner wrote:
> On Sat, Nov 14 2020 at 13:18, Ashok Raj wrote:
> > On Sat, Nov 14, 2020 at 10:34:30AM +, Christoph Hellwig wrote:
> >> On Thu, Nov 12, 2020 at 11:42:46PM +0100, Thomas Gleixner wrote:
> >> Which is why I really think we need exp
On Sat, Nov 14 2020 at 13:18, Ashok Raj wrote:
> On Sat, Nov 14, 2020 at 10:34:30AM +, Christoph Hellwig wrote:
>> On Thu, Nov 12, 2020 at 11:42:46PM +0100, Thomas Gleixner wrote:
>> Which is why I really think we need explicit opt-ins for "native"
>> SIOV handling and for paravirtualized SIOV
On Sat, Nov 14, 2020 at 10:34:30AM +, Christoph Hellwig wrote:
> On Thu, Nov 12, 2020 at 11:42:46PM +0100, Thomas Gleixner wrote:
> > DMI vendor name is a pretty good final check when the bit is 0. The
> > strings I'm aware of are:
> >
> > QEMU, Bochs, KVM, Xen, VMware, VMW, VMware Inc., innotek
On Thu, Nov 12, 2020 at 11:42:46PM +0100, Thomas Gleixner wrote:
> DMI vendor name is a pretty good final check when the bit is 0. The
> strings I'm aware of are:
>
> QEMU, Bochs, KVM, Xen, VMware, VMW, VMware Inc., innotek GmbH, Oracle
> Corporation, Parallels, BHYVE, Microsoft Corporation
>
> whi
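A minimal sketch of the DMI check being described, matching the system
vendor string against that list. The helper name is illustrative, and
whether exact or prefix matching is right for entries like "VMware" vs
"VMware Inc." is left open:

#include <linux/dmi.h>
#include <linux/kernel.h>
#include <linux/string.h>

static const char * const hv_vendors[] = {
        "QEMU", "Bochs", "KVM", "Xen", "VMware", "VMW", "VMware Inc.",
        "innotek GmbH", "Oracle Corporation", "Parallels", "BHYVE",
        "Microsoft Corporation",
};

/* True if the DMI system vendor names a known hypervisor vendor */
static bool dmi_vendor_is_hypervisor(void)
{
        const char *vendor = dmi_get_system_info(DMI_SYS_VENDOR);
        unsigned int i;

        if (!vendor)
                return false;

        for (i = 0; i < ARRAY_SIZE(hv_vendors); i++) {
                if (!strcmp(vendor, hv_vendors[i]))
                        return true;
        }
        return false;
}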
On Fri, Nov 13, 2020 at 08:12:39AM -0800, Luck, Tony wrote:
> > Of course this is not only an x86 problem. Every architecture which
> > supports virtualization has the same issue. ARM(64) has no way to tell
> > for sure whether the machine runs bare metal either. No idea about the
> > other archite
> Of course this is not only an x86 problem. Every architecture which
> supports virtualization has the same issue. ARM(64) has no way to tell
> for sure whether the machine runs bare metal either. No idea about the
> other architectures.
Sounds like a hypervisor problem. If the VMM provides perfe
On Fri, Nov 13 2020 at 02:42, Kevin Tian wrote:
>> From: Thomas Gleixner
> CPUID#1_ECX is an x86 thing. Do we need to figure out
> probably_on_bare_metal for every architecture altogether, or is it OK to just
> handle it for x86 arch at this stage? Based on previous discussions
> ims is just one
On Fri, Nov 13, 2020 at 02:42:02AM +, Tian, Kevin wrote:
> CPUID#1_ECX is an x86 thing. Do we need to figure out
> probably_on_bare_metal for every architecture altogether, or is it OK to just
> handle it for x86 arch at this stage? Based on previous discussions
> ims is just one piece of mul
> From: Thomas Gleixner
> Sent: Friday, November 13, 2020 6:43 AM
>
> On Thu, Nov 12 2020 at 14:32, Konrad Rzeszutek Wilk wrote:
> >> 4. Using CPUID to detect running as guest. But as Thomas pointed out, this
> >> approach is less reliable as not all hypervisors do it this way.
> >
> > Is that truly
On Thu, Nov 12 2020 at 14:32, Konrad Rzeszutek Wilk wrote:
>> 4. Using CPUID to detect running as guest. But as Thomas pointed out, this
>> approach is less reliable as not all hypervisors do it this way.
>
> Is that truly true? It is the first time I have seen the argument that extra
> steps are needed and
..monster snip..
> 4. Using CPUID to detect running as guest. But as Thomas pointed out, this
> approach is less reliable as not all hypervisors do it this way.
Is that truly true? It is the first time I have seen the argument that extra
steps are needed and that checking for X86_FEATURE_HYPERVISOR is not
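For context, the check under debate is essentially a one-liner; a
minimal sketch (the helper name is illustrative):

#include <asm/cpufeature.h>

/*
 * CPUID leaf 1, ECX bit 31 is reserved-zero on bare metal silicon and
 * set by most, but per Thomas not all, hypervisors. Linux exposes it
 * as X86_FEATURE_HYPERVISOR.
 */
static bool cpuid_reports_hypervisor(void)
{
        return boot_cpu_has(X86_FEATURE_HYPERVISOR);
}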
On Wed, Nov 11, 2020 at 02:17:48AM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Tuesday, November 10, 2020 10:24 PM
> >
> > On Tue, Nov 10, 2020 at 06:13:23AM -0800, Raj, Ashok wrote:
> >
> > > This isn't just for idxd, as I mentioned earlier, there are vendors other
> > > than
On Wed, Nov 11, 2020 at 03:03:21PM -0800, Raj, Ashok wrote:
> By default the DVSEC is not presented to guest even when the full PF is
> presented to guest. I believe VFIO only builds and presents known standard
> capabilities and specific extended capabilities. I'm a bit weak but maybe
> @AlexWill
On Wed, Nov 11, 2020 at 11:27:28PM +0100, Thomas Gleixner wrote:
> On Wed, Nov 11 2020 at 08:09, Ashok Raj wrote:
> >> > We'd also need a way for an OS running on bare metal to *know* that
> >> > it's on bare metal and can just compose MSI messages for itself. Since
> >> > we do expect bare metal t
Ashok,
On Wed, Nov 11 2020 at 15:03, Ashok Raj wrote:
> On Wed, Nov 11, 2020 at 11:27:28PM +0100, Thomas Gleixner wrote:
>> which is the obvious sane and safe logic. But sure, why am I asking for
>> sane and safe in the context of virtualization?
>
> We can pick how to solve this, and just waiting
On Wed, Nov 11 2020 at 08:09, Ashok Raj wrote:
> On Wed, Nov 11, 2020 at 03:41:59PM +, Christoph Hellwig wrote:
>> On Sun, Nov 08, 2020 at 07:36:34PM +, David Woodhouse wrote:
>> > So it does look like we're going to need a hypercall interface to
>> > compose an MSI message on behalf of the
On Wed, Nov 11, 2020 at 03:41:59PM +, Christoph Hellwig wrote:
> On Sun, Nov 08, 2020 at 07:36:34PM +, David Woodhouse wrote:
> > So it does look like we're going to need a hypercall interface to
> > compose an MSI message on behalf of the guest, for IMS to use. In fact
> > PCI devices assi
On Sun, Nov 08, 2020 at 07:36:34PM +, David Woodhouse wrote:
> So it does look like we're going to need a hypercall interface to
> compose an MSI message on behalf of the guest, for IMS to use. In fact
> PCI devices assigned to a guest could use that too, and then we'd only
> need to trap-and-r
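To make the shape of that concrete: one possible, purely hypothetical
ABI is a request/response pair in which the guest names a vCPU and
vector and the VMM returns an addr/data pair that is valid on the real
interrupt controller. No such ABI exists in this series; every name
below is made up:

#include <linux/types.h>

/* Guest -> VMM: where the interrupt should be delivered */
struct msi_compose_req {
        u32 vcpu_id;    /* target guest CPU */
        u8  vector;     /* guest vector */
};

/* VMM -> guest: a message the device can actually send */
struct msi_compose_resp {
        u64 addr;       /* MSI address, already host-translated */
        u32 data;       /* MSI data */
};

The guest would program the returned pair into its IMS slot, and, as
David suggests, assigned PCI devices could use the same path instead of
trap-and-emulate of their MSI/MSI-X tables.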
> From: Raj, Ashok
> Sent: Tuesday, November 10, 2020 10:13 PM
>
> Thomas,
>
> With all these interrupt message storms ;-), I'm missing how to move
> towards
> an end goal.
>
> On Tue, Nov 10, 2020 at 11:27:29AM +0100, Thomas Gleixner wrote:
> > Ashok,
> >
> > On Mon, Nov 09 2020 at 21:14, Asho
> From: Jason Gunthorpe
> Sent: Tuesday, November 10, 2020 10:19 PM
> On Mon, Nov 09, 2020 at 09:14:12PM -0800, Raj, Ashok wrote:
>
> > was used for interrupt message storage (on the wire they follow the
> > same format), and also to ensure interoperability of devices
> > supporting IMS across CP
> From: Jason Gunthorpe
> Sent: Tuesday, November 10, 2020 10:24 PM
>
> On Tue, Nov 10, 2020 at 06:13:23AM -0800, Raj, Ashok wrote:
>
> > This isn't just for idxd, as I mentioned earlier, there are vendors other
> > than Intel already working on this. In all cases the need for guest direct
> > m
> Hi David
>
> I didn't follow the support for 32768 CPUs in guest without IR support.
>
> Can you tell me how that is done?
Using bits 11-5 of the MSI address (the other 7 bits of "Extended
Destination ID" that aren't the Remappable Format indicator).
And physical addressing mode, which
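A sketch of the address layout David is describing, assuming
destination ID bits 7:0 sit in address bits 19:12 as usual and bits
14:8 go into address bits 11:5, giving 15 bits and hence 32768 CPUs:

#include <linux/types.h>

#define MSI_ADDR_BASE   0xfee00000U

/*
 * Compose the low 32 bits of an MSI address for a 15-bit APIC ID
 * without interrupt remapping. Bit 4 (Remappable Format indicator)
 * stays clear, as do bit 3 (redirection hint) and bit 2 (destination
 * mode), i.e. fixed delivery, physical addressing.
 */
static u32 msi_addr_ext_dest_id(u32 apicid)
{
        return MSI_ADDR_BASE |
               ((apicid & 0xff) << 12) |        /* dest ID bits 7:0  */
               (((apicid >> 8) & 0x7f) << 5);   /* dest ID bits 14:8 */
}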
On Tue, Nov 10, 2020 at 06:13:23AM -0800, Raj, Ashok wrote:
> This isn't just for idxd, as I mentioned earlier, there are vendors other
> than Intel already working on this. In all cases the need for guest direct
> manipulation of interrupt store hasn't come up. From the discussion, it
> seems lik
On Mon, Nov 09, 2020 at 09:14:12PM -0800, Raj, Ashok wrote:
> There are multiple tools (such as logic analyzers) and OEM test validation
> harnesses that depend on such DWORD-sized DMA writes with no PASID as
> interrupt messages. One piece of feedback we had received in the development of the
>
Hi David
I didn't follow the support for 32768 CPUs in guest without IR support.
Can you tell me how that is done?
On Sun, Nov 08, 2020 at 03:25:57PM -0800, Ashok Raj wrote:
> On Sun, Nov 08, 2020 at 06:34:55PM +, David Woodhouse wrote:
> > >
> > > When we do interrupt remapping support in g
Thomas,
With all these interrupt message storms ;-), I'm missing how to move towards
an end goal.
On Tue, Nov 10, 2020 at 11:27:29AM +0100, Thomas Gleixner wrote:
> Ashok,
>
> On Mon, Nov 09 2020 at 21:14, Ashok Raj wrote:
> > On Mon, Nov 09, 2020 at 11:42:29PM +0100, Thomas Gleixner wrote:
> >>
Ashok,
On Mon, Nov 09 2020 at 21:14, Ashok Raj wrote:
> On Mon, Nov 09, 2020 at 11:42:29PM +0100, Thomas Gleixner wrote:
>> On Mon, Nov 09 2020 at 13:30, Jason Gunthorpe wrote:
> The approach to IMS is phased.
>
> #1 Allow a physical device to scale beyond the limits of PCIe MSI-X
>Fo
Hi Thomas,
On Mon, Nov 09, 2020 at 11:42:29PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 09 2020 at 13:30, Jason Gunthorpe wrote:
> >
> > The relevance of PASID is this:
> >
> >> Again, trap emulate does not work for IMS when the IMS store is software
> >> managed guest memory and not part of the
On Mon, Nov 09 2020 at 13:30, Jason Gunthorpe wrote:
> On Mon, Nov 09, 2020 at 12:21:22PM +0100, Thomas Gleixner wrote:
>> >> Is the IOMMU/Interrupt remapping unit able to catch such messages which
>> >> go outside the space to which the guest is allowed to signal to? If yes,
>> >> problem solved.
On Mon, Nov 09, 2020 at 01:30:34PM -0400, Jason Gunthorpe wrote:
>
> > Again, trap emulate does not work for IMS when the IMS store is software
> > managed guest memory and not part of the device. And that's the whole
> > reason why we are discussing this.
>
> With PASID tagged interrupts and a I
On Mon, Nov 09, 2020 at 03:08:17PM +0100, Thomas Gleixner wrote:
> On Mon, Nov 09 2020 at 12:14, Thomas Gleixner wrote:
> > On Sun, Nov 08 2020 at 15:58, Ashok Raj wrote:
> >> On Sun, Nov 08, 2020 at 07:47:24PM +0100, Thomas Gleixner wrote:
> >> But for SIOV devices there is no PASID filtering at t
On Mon, Nov 09, 2020 at 12:21:22PM +0100, Thomas Gleixner wrote:
> >> Is the IOMMU/Interrupt remapping unit able to catch such messages which
> >> go outside the space to which the guest is allowed to signal to? If yes,
> >> problem solved. If no, then IMS storage in guest memory can't ever work.
On Mon, Nov 09, 2020 at 07:37:03AM +, Tian, Kevin wrote:
> > 3) SIOV sub device assigned to the guest.
> >
> > The difference between SIOV and SRIOV is the device must attach a
> > PASID to every TLP triggered by the guest. Logically we'd expect
> > when IMS is used in this situat
On Mon, Nov 09 2020 at 12:14, Thomas Gleixner wrote:
> On Sun, Nov 08 2020 at 15:58, Ashok Raj wrote:
>> On Sun, Nov 08, 2020 at 07:47:24PM +0100, Thomas Gleixner wrote:
>> But for SIOV devices there is no PASID filtering at the remap level since
>> interrupt messages don't carry PASID in the TLP.
On Sun, Nov 08 2020 at 15:58, Ashok Raj wrote:
> On Sun, Nov 08, 2020 at 07:47:24PM +0100, Thomas Gleixner wrote:
>>
>>
>> Now if we look at the virtualization scenario and device hand through
>> then the structure in the guest view is not any different from the basic
>> case. This works with PCI
> From: Raj, Ashok
> Sent: Monday, November 9, 2020 7:59 AM
>
> Hi Thomas,
>
> [-] Jing, she isn't working at Intel anymore.
>
> Now this is getting compiled as a book :-).. Thanks a ton!
>
> One question on the hypercall case that isn't immediately
> clear to me.
>
> On Sun, Nov 08, 2020 at
> From: Jason Gunthorpe
> Sent: Monday, November 9, 2020 7:24 AM
>
> On Sun, Nov 08, 2020 at 07:47:24PM +0100, Thomas Gleixner wrote:
> >
> > That means the guest needs a way to ask the hypervisor for a proper
> > translation, i.e. a hypercall. Now where to do that? Looking at the
> > above remap
> Subject: RE: [PATCH v4 06/17] PCI: add SIOV and IMS capability detection
>
> On Fri, Nov 06 2020 at
Hi Jason
On Sun, Nov 08, 2020 at 07:41:42PM -0400, Jason Gunthorpe wrote:
> On Sun, Nov 08, 2020 at 10:11:24AM -0800, Raj, Ashok wrote:
>
> > > On (kvm) virtualization the addr/data pair the IRQ domain hands out
> > > doesn't work. It is some fake thing.
> >
> > Is it really some fake thing? I t
Hi Thomas,
[-] Jing, she isn't working at Intel anymore.
Now this is getting compiled as a book :-).. Thanks a ton!
One question on the hypercall case that isn't immediately
clear to me.
On Sun, Nov 08, 2020 at 07:47:24PM +0100, Thomas Gleixner wrote:
>
>
> Now if we look at the virtualizatio
On Sun, Nov 08, 2020 at 10:11:24AM -0800, Raj, Ashok wrote:
> > On (kvm) virtualization the addr/data pair the IRQ domain hands out
> > doesn't work. It is some fake thing.
>
> Is it really some fake thing? I thought the vCPU and vector are real
> for a guest, and VMM ensures when interrupts are
Hi Jason,
On Sun, Nov 08, 2020 at 07:23:41PM -0400, Jason Gunthorpe wrote:
>
> IDXD is worring about case #4, I think, but I didn't follow in that
> whole discussion about the IMS table layout if they PASID tag the IMS
> MemWr or not?? Ashok can you clarify?
>
The PASID in the interrupt store i
On Sun, Nov 08, 2020 at 11:47:13PM +0100, Thomas Gleixner wrote:
> OTOH, what's the chance that a guest runs on something which
>
> 1) Does not have X86_FEATURE_HYPERVISOR set in cpuid 1/ECX
>
> and
>
> 2) Cannot be identified as Xen domain
>
> and
>
> 3) Does not have a DMI vendor entr
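Put together, the heuristic Thomas is describing could look roughly
like the sketch below. The structure follows his three conditions; the
helper names are illustrative (dmi_vendor_is_hypervisor() is the
earlier DMI sketch):

#include <asm/cpufeature.h>
#include <asm/hypervisor.h>

static bool __init probably_on_bare_metal(void)
{
        /* 1) Hypervisor bit, CPUID leaf 1 ECX[31] */
        if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
                return false;

        /* 2) A detected hypervisor type, e.g. a Xen domain */
        if (x86_hyper_type != X86_HYPER_NATIVE)
                return false;

        /* 3) DMI vendor string of a known VMM */
        if (dmi_vendor_is_hypervisor())
                return false;

        /* Still only "probably": a VMM can defeat all three checks */
        return true;
}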
On Sun, Nov 08, 2020 at 06:34:55PM +, David Woodhouse wrote:
> >
> > When we do interrupt remapping support in guest which would be required
> > if we support x2apic in guest, I think this is something we should look
> > into more
> > carefully to make this work.
>
> No, interrupt remappin
On Sun, Nov 08, 2020 at 07:47:24PM +0100, Thomas Gleixner wrote:
>
> That means the guest needs a way to ask the hypervisor for a proper
> translation, i.e. a hypercall. Now where to do that? Looking at the
> above remapping case it's pretty obvious:
>
>
> |
>
> On Fri, Nov 06 2020 at 09:14, Jason Gunthorpe wrote:
>> On Fri, Nov 06, 2020 at 09:48:34AM +, Tian, Kevin wrote:
>> For instance you could put a "disable IMS" flag in the ACPI tables, in
>> the config space of the emulated root port, or any other areas that
>> clearly belong to the platfo
On Sun, Nov 08 2020 at 22:09, David Woodhouse wrote:
>> On Fri, Nov 06 2020 at 09:14, Jason Gunthorpe wrote:
>>> On Fri, Nov 06, 2020 at 09:48:34AM +, Tian, Kevin wrote:
>>> For instance you could put a "disable IMS" flag in the ACPI tables, in
>>> the config space of the emulated root port,
On Sun, Nov 08 2020 at 19:36, David Woodhouse wrote:
> On Sun, 2020-11-08 at 19:47 +0100, Thomas Gleixner wrote:
>> So this needs some thought.
>
> The problem here is that Intel implemented interrupt remapping in a way
> which is anathema to structured, ordered IRQ domains.
>
> When a guest writes
On Fri, Nov 06 2020 at 09:14, Jason Gunthorpe wrote:
> On Fri, Nov 06, 2020 at 09:48:34AM +, Tian, Kevin wrote:
> For instance you could put a "disable IMS" flag in the ACPI tables, in
> the config space of the emulated root port, or any other areas that
> clearly belong to the platform.
>
> T
On Sun, 2020-11-08 at 19:47 +0100, Thomas Gleixner wrote:
> This only works when the guest OS actually knows that it runs in a
> VM. If the guest can't figure that out, i.e. via CPUID, this cannot be
> solved because from the guest OS view that's the same as running on bare
> metal. Obviously on ba
On Fri, Nov 06 2020 at 20:12, Jason Gunthorpe wrote:
> All IMS device drivers will work correctly. No VMM device emulation is
> ever needed to translate addr/data pairs.
>
> Earlier in this thread Kevin said hyper-v is already working this way,
> even for MSI/MSI-X. To me this says it is fundamenta
On Sun, 2020-11-08 at 10:11 -0800, Raj, Ashok wrote:
> Hi Jason
>
> Thanks, it's now clear what you had mentioned earlier.
>
> I had a couple of questions/clarifications below. Thanks for working
> through this.
>
> On Fri, Nov 06, 2020 at 08:12:07PM -0400, Jason Gunthorpe wrote:
> > On Fri, Nov 06,
Hi Jason
Thanks, it's now clear what you had mentioned earlier.
I had a couple of questions/clarifications below. Thanks for working
through this.
On Fri, Nov 06, 2020 at 08:12:07PM -0400, Jason Gunthorpe wrote:
> On Fri, Nov 06, 2020 at 03:47:00PM -0800, Dan Williams wrote:
>
> > Also feel free to
On Fri, Nov 6, 2020 at 4:12 PM Jason Gunthorpe wrote:
>
> On Fri, Nov 06, 2020 at 03:47:00PM -0800, Dan Williams wrote:
[..]
> The only sane way to implement this generically is for the VMM to
> provide a hypercall to obtain a real *working* addr/data pair(s) and
> then have the platform hand thos
On Fri, Nov 06 2020 at 09:48, Kevin Tian wrote:
>> From: Jason Gunthorpe
>> On Wed, Nov 04, 2020 at 01:34:08PM +, Tian, Kevin wrote:
>> The interrupt controller is responsible to create an addr/data pair
>> for an interrupt message. It sets the message format and ensures it
>> routes to the pr
On Fri, Nov 06, 2020 at 03:47:00PM -0800, Dan Williams wrote:
> Also feel free to straighten me out (Jason or Ashok) if I've botched
> the understanding of this.
It is pretty simple when you get down to it.
We have a new kernel API that Thomas added:
pci_subdevice_msi_create_irq_domain()
Thi
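In rough strokes, driver usage would look like the fragment below. The
function name is the one cited above; since the series is unmerged, the
second argument and surrounding plumbing are assumptions, not the final
form:

/*
 * Hypothetical sketch: create a per-device MSI irq domain for
 * subdevices. The irq_chip supplied through the (assumed)
 * msi_domain_info argument writes each addr/data pair into the
 * device's IMS storage instead of the PCI MSI-X table.
 */
struct irq_domain *dom;

dom = pci_subdevice_msi_create_irq_domain(pdev, &my_ims_domain_info);
if (!dom)
        return -ENODEV;

As Jason notes elsewhere in the thread, under KVM the addr/data pair
such a domain hands out "is some fake thing", which is exactly why the
guest case needs the hypercall discussed above.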
On Fri, Nov 6, 2020 at 9:51 AM Jason Gunthorpe wrote:
[..]
> > This is true for IMS as well. But probably not implemented in the kernel as
> > such. From a HW point of view (take idxd for instance) the facility is
> > available to native OS as well. The early RFC supported this for native.
>
> I c
On Fri, Nov 06, 2020 at 08:48:50AM -0800, Raj, Ashok wrote:
> > The IMS flag belongs in the platform not in the devices.
>
> This support is mostly a SW thing, right? We don't need to muck with
> platform/ACPI for that matter.
Something needs to tell the guest OS platform what to do, so you need
Hi Jason
On Fri, Nov 06, 2020 at 09:14:15AM -0400, Jason Gunthorpe wrote:
> On Fri, Nov 06, 2020 at 09:48:34AM +, Tian, Kevin wrote:
> > > The interrupt controller is responsible to create an addr/data pair
> > > for an interrupt message. It sets the message format and ensures it
> > > routes
On Fri, Nov 06, 2020 at 09:48:34AM +, Tian, Kevin wrote:
> > The interrupt controller is responsible to create an addr/data pair
> > for an interrupt message. It sets the message format and ensures it
> > routes to the proper CPU interrupt handler. Everything about the
> > addr/data pair is own
> From: Jason Gunthorpe
> Sent: Wednesday, November 4, 2020 9:54 PM
>
> On Wed, Nov 04, 2020 at 01:34:08PM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Wednesday, November 4, 2020 8:40 PM
> > >
> > > On Wed, Nov 04, 2020 at 03:41:33AM +, Tian, Kevin wrote:
> > > > > From
On Wed, Nov 04, 2020 at 01:34:08PM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Wednesday, November 4, 2020 8:40 PM
> >
> > On Wed, Nov 04, 2020 at 03:41:33AM +, Tian, Kevin wrote:
> > > > From: Jason Gunthorpe
> > > > Sent: Tuesday, November 3, 2020 8:44 PM
> > > >
> > > >
> From: Jason Gunthorpe
> Sent: Wednesday, November 4, 2020 8:40 PM
>
> On Wed, Nov 04, 2020 at 03:41:33AM +, Tian, Kevin wrote:
> > > From: Jason Gunthorpe
> > > Sent: Tuesday, November 3, 2020 8:44 PM
> > >
> > > On Tue, Nov 03, 2020 at 02:49:27AM +, Tian, Kevin wrote:
> > >
> > > > >
On Wed, Nov 04, 2020 at 03:41:33AM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Tuesday, November 3, 2020 8:44 PM
> >
> > On Tue, Nov 03, 2020 at 02:49:27AM +, Tian, Kevin wrote:
> >
> > > > There is a missing hypercall to allow the guest to do this on its own,
> > > > presu
> From: Jason Gunthorpe
> Sent: Tuesday, November 3, 2020 8:44 PM
>
> On Tue, Nov 03, 2020 at 02:49:27AM +, Tian, Kevin wrote:
>
> > > There is a missing hypercall to allow the guest to do this on its own,
> > > presumably it will someday be fixed so IMS can work in guests.
> >
> > Hypercall
On Tue, Nov 03, 2020 at 02:49:27AM +, Tian, Kevin wrote:
> > There is a missing hypercall to allow the guest to do this on its own,
> > presumably it will someday be fixed so IMS can work in guests.
>
> Hypercall is VMM specific, while IMS cap provides a VMM-agnostic
> interface so any guest
> From: Jason Gunthorpe
> Sent: Monday, November 2, 2020 9:22 PM
>
> On Fri, Oct 30, 2020 at 03:49:22PM -0700, Dave Jiang wrote:
> >
> >
> > On 10/30/2020 3:45 PM, Jason Gunthorpe wrote:
> > > On Fri, Oct 30, 2020 at 02:20:03PM -0700, Dave Jiang wrote:
> > > > So the intel-iommu driver checks for
On Fri, Oct 30, 2020 at 03:49:22PM -0700, Dave Jiang wrote:
>
>
> On 10/30/2020 3:45 PM, Jason Gunthorpe wrote:
> > On Fri, Oct 30, 2020 at 02:20:03PM -0700, Dave Jiang wrote:
> > > So the intel-iommu driver checks for the SIOV cap. And the idxd driver
> > > checks for SIOV and IMS cap. There wil
On 10/30/2020 3:45 PM, Jason Gunthorpe wrote:
On Fri, Oct 30, 2020 at 02:20:03PM -0700, Dave Jiang wrote:
So the intel-iommu driver checks for the SIOV cap. And the idxd driver
checks for SIOV and IMS cap. There will be other upcoming drivers that will
check for such cap too. It is Intel vend
On Fri, Oct 30, 2020 at 02:20:03PM -0700, Dave Jiang wrote:
> So the intel-iommu driver checks for the SIOV cap. And the idxd driver
> checks for SIOV and IMS cap. There will be other upcoming drivers that will
> check for such cap too. It is Intel vendor specific right now, but SIOV is
> public an
On Fri, Oct 30, 2020 at 02:20:03PM -0700, Dave Jiang wrote:
>
>
> On 10/30/2020 12:51 PM, Bjorn Helgaas wrote:
> > On Fri, Oct 30, 2020 at 11:51:32AM -0700, Dave Jiang wrote:
> > > Intel Scalable I/O Virtualization (SIOV) enables sharing of I/O devices
> > > across isolated domains through PASID
On 10/30/2020 12:51 PM, Bjorn Helgaas wrote:
On Fri, Oct 30, 2020 at 11:51:32AM -0700, Dave Jiang wrote:
Intel Scalable I/O Virtualization (SIOV) enables sharing of I/O devices
across isolated domains through PASID-based sub-device partitioning.
Interrupt Message Storage (IMS) enables devices
On Fri, Oct 30, 2020 at 11:51:32AM -0700, Dave Jiang wrote:
> Intel Scalable I/O Virtualization (SIOV) enables sharing of I/O devices
> across isolated domains through PASID-based sub-device partitioning.
> Interrupt Message Storage (IMS) enables devices to store the interrupt
> messages in a devic
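The detection the patch title refers to is DVSEC-based. A hedged sketch
of scanning for a vendor-specific extended capability in general; the
DVSEC ID tested here is a placeholder, not the value from the patch:

#include <linux/pci.h>

static bool pci_has_siov_dvsec(struct pci_dev *pdev)
{
        u16 pos = 0;
        u16 vendor, id;

        while ((pos = pci_find_next_ext_capability(pdev, pos,
                                                   PCI_EXT_CAP_ID_DVSEC))) {
                pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER1, &vendor);
                pci_read_config_word(pdev, pos + PCI_DVSEC_HEADER2, &id);
                if (vendor == PCI_VENDOR_ID_INTEL && id == 0x5 /* placeholder */)
                        return true;
        }
        return false;
}

Much of the thread above is about whether a device capability like this
is the right place for that information at all, as opposed to something
the platform or the VMM owns.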