On Thu, Nov 09, 2017 at 01:54:57PM -0700, Alex Williamson wrote:
> On Thu, 9 Nov 2017 19:35:14 +0100
> Gerd Hoffmann wrote:
>
> > Hi,
> >
> > > struct vfio_device_gfx_plane_info lacks the head field we've been
> > > discussing. Thanks,
> >
> sources according to the newly exposed buffer
> or just re-use the existing resource related to the old buffer.
Reviewed-by: Gerd Hoffmann
git://git.kraxel.org/qemu branch: work/intel-vgpu
>
> A topic branch with the latest patch set is:
> https://github.com/intel/gvt-linux.git branch: topic/dmabuf
Tested-by: Gerd Hoffmann
> > > This patch set can be tried with the following example:
> > > git://git.kraxel.org/qemu branch: work/intel-vgpu
> > >
> >
> > Tested-by: Gerd Hoffmann
>
> Hi Gerd,
>
> Can you share the XML snippets required for the VM to make this work?
Hi,
> Yeah, that could solve the problem. But I'm not sure whether it would
> be acceptable.
> Zhenyu, can you share your comments?
Ping? Any progress here? We are at 4.14-rc3 already. v15 is needed
really soon now, otherwise the 4.15 merge window will be missed.
cheers,
Gerd
> object.
> With the fd of this dma-buf, userspace can directly handle this
> buffer.
Tested-by: Gerd Hoffmann
cheers,
Gerd
tracked by the
> kernel.
> The returned fd in struct vfio_device_query_gfx_plane can be a new
> fd or an old fd of a re-exported dma-buf. Host userspace can check
> the value of the fd to see whether it needs to create a new resource
> for the new fd or just reuse the existing resource.
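A minimal userspace sketch of the reuse logic described above, written against the interface as quoted in this revision (an fd reported inside struct vfio_device_query_gfx_plane). Everything here, including the import_dmabuf() helper, the one-entry cache, and the availability of the series' uapi header, is an assumption for illustration, not the final UAPI or QEMU code.

#include <sys/ioctl.h>
#include <linux/vfio.h>          /* assuming the uapi header from this series */

/* One-entry cache for the scanout we imported last (illustrative only). */
static int   cached_fd = -1;     /* fd reported by the previous query    */
static void *cached_resource;    /* e.g. a texture created from that fd  */

/* Hypothetical helper that turns a dma-buf fd into a display resource. */
void *import_dmabuf(int fd, struct vfio_device_query_gfx_plane *plane);

static void *update_scanout(int device_fd)
{
    struct vfio_device_query_gfx_plane plane = {
        .argsz = sizeof(plane),
    };

    if (ioctl(device_fd, VFIO_DEVICE_QUERY_GFX_PLANE, &plane) < 0)
        return NULL;                      /* no plane to show right now */

    if (plane.fd == cached_fd)            /* old fd: same buffer, reuse */
        return cached_resource;

    cached_fd = plane.fd;                 /* new fd: create a new resource */
    cached_resource = import_dmabuf(plane.fd, &plane);
    return cached_resource;
}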
Hi,
> No, the parameter wouldn't be a char, you'd use an __u32 for the
> dmabuf_id. I'm just referencing the structure of GET_DEVICE_FD as an
> ioctl which returns an fd through the return value and takes a
> single parameter. I don't mean to imply matching the type of that
> parameter.
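A hedged sketch of the ioctl shape being referenced here: like GET_DEVICE_FD, the new fd comes back as the ioctl return value, and the only input is a single value, in this case an __u32 dmabuf_id. The name and ioctl number below are assumptions, not the merged definition.

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/types.h>
#include <linux/vfio.h>

/* Assumed definition, mirroring the GET_DEVICE_FD calling convention. */
#ifndef VFIO_DEVICE_GET_GFX_DMABUF
#define VFIO_DEVICE_GET_GFX_DMABUF  _IO(VFIO_TYPE, VFIO_BASE + 15)
#endif

static int get_gfx_dmabuf(int device_fd, __u32 dmabuf_id)
{
    /* Pass the id in; a dma-buf fd (or a negative errno) comes back. */
    int fd = ioctl(device_fd, VFIO_DEVICE_GET_GFX_DMABUF, &dmabuf_id);

    if (fd < 0)
        perror("VFIO_DEVICE_GET_GFX_DMABUF");
    return fd;
}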
Hi,
> These are both from Gerd. Gerd, do you have any objection to using a
> union to provide either the dmabuf fd or region index?
No.
> > It's like we want to propose a general interface used to share the
> > guest's buffer with the host. And the general interface, so far,
> > has two choices: region or dma-buf.
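A sketch of what such a union could look like inside the plane-info struct; only the two union members come from the discussion, while the surrounding fields and overall layout are assumptions.

#include <linux/types.h>

/* Sketch only: one query struct where the same slot reports either a
 * region index (region-based sharing) or a dmabuf id (dma-buf based
 * sharing), selected via the flags; other fields are elided here. */
struct vfio_device_gfx_plane_info {
	__u32 argsz;
	__u32 flags;
	/* ... drm_format, width, height, stride, size, x/y position ... */
	union {
		__u32 region_index;	/* region-based: index to mmap        */
		__u32 dmabuf_id;	/* dma-buf based: id to get an fd for */
	};
};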
On Fri, 2017-08-18 at 18:21 +0800, Tina Zhang wrote:
> +/**
> + * VFIO_DEVICE_QUERY_GFX_PLANE - _IOW(VFIO_TYPE, VFIO_BASE + 14,
> + *                                    struct vfio_device_query_gfx_plane)
> + *
> + * Set the drm_plane_type and flags, then retrieve information about
> + * the gfx plane.
> + *
> + * flags:
> + *   VFIO_GFX_
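The quoted comment is cut off while listing the flags; for context, here is a hedged sketch of how a consumer could use a PROBE-style flag. The flag and ioctl names are taken from the v14 changelog quoted below and are assumptions here, not verbatim UAPI.

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Probe-then-query pattern: PROBE combined with DMABUF (or REGION) asks
 * "is this plane type supported at all?" without returning plane data. */
static bool gfx_plane_dmabuf_supported(int device_fd)
{
    struct vfio_device_gfx_plane_info probe = {
        .argsz = sizeof(probe),
        .flags = VFIO_GFX_PLANE_TYPE_PROBE | VFIO_GFX_PLANE_TYPE_DMABUF,
    };

    return ioctl(device_fd, VFIO_DEVICE_QUERY_GFX_PLANE, &probe) == 0;
}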
On Fri, 2017-08-18 at 18:21 +0800, Tina Zhang wrote:
> v13->v14:
> 1) add PROBE, DMABUF and REGION flags. (Alex)
> 2) return -ENXIO when gem proxy object is banned by ioctl.
> (Chris) (Daniel)
> 3) add some details about the float pixel format. (Daniel)
> 4) add F suffix to the defined name. (Daniel)
Hi,
> > However, I see VFIO_DEVICE_QUERY_GFX_PLANE failures which I think
> > should not be there. When the guest hasn't defined a plane yet I
> > get "No such device" errors instead of a plane_info struct with the
> > fields (drm_format, width, height, size) set to zero. I also see "Ba
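A sketch of the consumer-side behaviour this report is arguing for: treat "the guest has not set up a plane yet" as an empty plane rather than a hard failure. The specific errno values and field names are assumptions based on the surrounding discussion.

#include <errno.h>
#include <stdbool.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Returns true with an all-zero info struct while the guest has no
 * plane yet, so the caller can simply display nothing and retry later;
 * only unexpected errors are reported as failures. */
static bool query_primary_plane(int device_fd,
                                struct vfio_device_gfx_plane_info *info)
{
    memset(info, 0, sizeof(*info));
    info->argsz = sizeof(*info);
    info->flags = VFIO_GFX_PLANE_TYPE_DMABUF;

    if (ioctl(device_fd, VFIO_DEVICE_QUERY_GFX_PLANE, info) == 0)
        return true;

    if (errno == ENXIO || errno == ENODEV) {      /* "no plane yet" */
        memset(info, 0, sizeof(*info));
        info->argsz = sizeof(*info);
        return true;
    }
    return false;                                  /* real error */
}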
Hi,
> In KVMGT, we need to register an iodev only *after* the BAR
> registers are written by the guest.
Oh, the guest can write the BAR registers at any time. Typically that
happens only at boot, but it can also happen at runtime, for example on
reboot.
I've also seen the
On Sat, 2014-12-06 at 12:17 +0800, Jike Song wrote:
> On 12/05/2014 04:50 PM, Gerd Hoffmann wrote:
> > A few comments on the kernel stuff (brief look so far, also
> > compile-tested only, intel gfx on my test machine is too old).
> >
> > * Noticed the kernel b
Hi,
> >> Out of curiosity, what will be the mechanism to prevent a vGPU instance
> >> from ignoring the ballooning data? Must be something in the hypervisor
> >> blocking pass-through access to such domains?
> > Well, although we have range-check logic on the host side (which checks
> > the legal
Hi,
> I didn't figure out how each domain knows which fences to use? They
> know how many, but which ones?
I think the guest doesn't have to know, because MMIO access is trapped
by the hypervisor anyway.
cheers,
Gerd
On Tue, 2014-12-16 at 15:01 +0000, Tvrtko Ursulin wrote:
> Hi,
>
> On 12/16/2014 02:41 PM, Gerd Hoffmann wrote:
> >> I didn't figure out how each domain knows which fences to use? They
> >> know how many, but which ones?
> >
> > I think the guest
Hi,
> > It's not possible to allow guests direct access to the fence registers
> > though. And if every fence register access traps into the hypervisor
> > anyway, the hypervisor can easily map the guest virtual fence to the
> > host physical fence, so there is no need to tell the guest which fences
> > to use.
Hi,
> Stuff like driver load/unload, suspend/resume, runtime PM and GPU reset
> are already super-fragile as-is. Every time we change something in there,
> a bunch of related things fall apart. With vgt we'll have even more
> complexity in there, and I really think we need to make that complexity
Hi,
> I guess from a really high level this boils down to having a Xen-like
> design (where the hypervisor and dom0 driver are separate, but cooperate
> somewhat) or KVM (where the virtualization sits on top of a normal
> kernel). Afaics the KVM model seems to have a lot more momentum overall.