bit-stream compatibility.
> >
>
> I have another question on this: if we restore the config space while in
> pre-copy (including enabling interrupts), does it affect the _RESUMING
> state (paused) of the device on the dst host (could it cause the device to
> send interrupts, which should not be allowed at this stage)? Does the
> restore sequence need to be further discussed and a consensus (spec)
> reached, taking into account other devices and the corresponding actions
> of the vendor driver?
>
> > Given our timing relative to QEMU 5.2, the only path I feel comfortable
> > with is to move forward with downgrading vfio migration support to be
> > enabled via an experimental option. Objections? Thanks,
>
> Alright, but this issue is related to our ARM GICv4.1 migration scheme.
> Could you give a rough idea about this (where to enable interrupts? we
> hope it to be after the restoring of the VGIC)?
I disagree. If this is only specific to Huawei's ARM GIC implementation, why
do we want to make the entire VFIO-based migration an experimental feature?
Thanks,
Neo
>
> Thanks,
> Shenming
ng reason to mark it experimental. There's
> > >> clearly demand for vfio device migration and even if the practical use
> > >> cases are initially small, they will expand over time and hardware will
> > >> get better. My objection is that the current behavi
he incompatibility case in the
> first place, by only choosing to migrate to a host that we know is going
> to be compatible.
>
> This would need some kind of way to report the full list of supported
> versions against the mdev supported types on the host.
What would be the typical scenario / use case for the mgmt layer to query the
version information? Do they expect this to be done completely offline, as long
as the vendor driver is installed on each host?
Thanks,
Neo
>
>
Quick thought - would it be possible / better to have Kirti focus on the QEMU
patches and Yan take care of the GVT-g kernel driver side changes? This will
give us the best testing coverage. Hope I don't step on anybody's toes here. ;-)
Thanks,
Neo
>
> Thanks
> Kevin
o/mdev/mdev_private.h
> > create mode 100644 drivers/vfio/mdev/mdev_sysfs.c
> > create mode 100644 drivers/vfio/mdev/vfio_mdev.c
> > create mode 100644 include/linux/mdev.h
> > create mode 100644 samples/vfio-mdev/Makefile
> > create mode 100644 samples/vfio-mdev/mt
On Fri, Oct 14, 2016 at 10:51:24AM -0600, Alex Williamson wrote:
> On Fri, 14 Oct 2016 09:35:45 -0700
> Neo Jia wrote:
>
> > On Fri, Oct 14, 2016 at 08:46:01AM -0600, Alex Williamson wrote:
> > > On Fri, 14 Oct 2016 08:41:58 -0600
> > > Alex Williamson wrote
> On 11/10/2016 04:39, Xiao Guangrong wrote:
> > > >>>>
> > > >>>>
> > > >>>> On 10/11/2016 02:32 AM, Paolo Bonzini wrote:
> > > >>>>>
> > > >>>>>
> > > >>>
Regarding the patch development and given the current status, especially where
we are and what we have been through, I am very confident that we should be able
to fully handle this ourselves, but thanks for offering help anyway!
We should be able to react as fast as possible based on the publi
"gpu" which will pull several attributes as
mandatory.
Thanks,
Neo
>
>
>
> PART 1: mdev core driver
>
> [task]
> - the mdev bus/device support
> - the utilities of mdev lifecycle management
>
On Thu, Sep 29, 2016 at 09:03:40AM +0100, Daniel P. Berrange wrote:
> On Wed, Sep 28, 2016 at 12:22:35PM -0700, Neo Jia wrote:
> > On Thu, Sep 22, 2016 at 03:26:38PM +0100, Daniel P. Berrange wrote:
> > > On Thu, Sep 22, 2016 at 08:19:21AM -0600, Alex Williamson wrote:
> >
On Wed, Sep 28, 2016 at 04:31:25PM -0400, Laine Stump wrote:
> On 09/28/2016 03:59 PM, Neo Jia wrote:
> > On Wed, Sep 28, 2016 at 07:45:38PM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...@nvidia.com]
> > > > Sent: Thursday, September 29, 2016 3:23 A
On Wed, Sep 28, 2016 at 01:55:47PM -0600, Alex Williamson wrote:
> On Wed, 28 Sep 2016 12:22:35 -0700
> Neo Jia wrote:
>
> > On Thu, Sep 22, 2016 at 03:26:38PM +0100, Daniel P. Berrange wrote:
> > > On Thu, Sep 22, 2016 at 08:19:21AM -0600, Alex Williamson wrote:
> &
On Wed, Sep 28, 2016 at 07:45:38PM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Thursday, September 29, 2016 3:23 AM
> >
> > On Thu, Sep 22, 2016 at 03:26:38PM +0100, Daniel P. Berrange wrote:
> > > On Thu, Sep 22, 2016 at 08:19:
'd expect to see a 'class' or 'type' attribute in the
> directory which tells you what kind of mdev it is. A
> valid 'class' value would be 'gpu'. The fb_length,
> resolution, and heads parameters would only be mandatory
> when class==gpu.
&g
;t control that. right?
> > >
> > >
> > > >>>> Lets remove 'id' from type id in XML if that is the concern.
> > > >>>> Supported
> > > >>>> types is going to be defined by vendor driver, so let vendor driv
the
> framework, but
> what if vfio.ko or vfio-pci.ko adds a few new capabilities in
> the future?
Exactly my point about the code sharing.
If a new cap is added inside vfio.ko or vfio-pci.ko, we can just add it into
vfio_mdev.ko.
Adding the code in one place is
On Wed, Sep 07, 2016 at 07:27:19PM +0100, Daniel P. Berrange wrote:
> On Wed, Sep 07, 2016 at 11:17:39AM -0700, Neo Jia wrote:
> > On Wed, Sep 07, 2016 at 10:44:56AM -0600, Alex Williamson wrote:
> > > On Wed, 7 Sep 2016 21:45:31 +0530
> > > Kirti Wankhede wrote:
>
preciate your thoughts on this issue, and consideration of how the NVIDIA
vGPU device model works, but so far I still feel we are borrowing a very
meaningful concept, "iommu group", to solve a device model issue, which I
actually hope can be worked around by a more independent piece of logic, an
et you in person at the KVM forum a couple of weeks ago so we could have a
better discussion.
We are trying our best to accommodate almost all requirements / comments from
use cases and code reviews while keeping few (or no) architectural changes
between revisions.
> We would be highly glad and t
ce framework.
> >
> > Signed-off-by: Kirti Wankhede
> > Signed-off-by: Neo Jia
> > Signed-off-by: Jike Song
> > ---
> > Documentation/vfio-mediated-device.txt | 203
> > +
> > 1 file changed, 203 insertions(+)
>
st
accommodate your requirements and needs in the future revisions.
I believe that would be the best and fastest way to collaborate and that is the
main purpose of having code review cycles.
Thanks,
Neo
>
> Alex
>
> >
> >
> > Key Changes from Nvidia v6:
> >
On Fri, Aug 19, 2016 at 03:22:48PM -0400, Laine Stump wrote:
> On 08/18/2016 12:41 PM, Neo Jia wrote:
> > Hi libvirt experts,
> >
> > I am starting this email thread to discuss the potential solution /
> > proposal of
> > integrating vGPU support into libv
On Fri, Aug 19, 2016 at 02:42:27PM +0200, Michal Privoznik wrote:
> On 18.08.2016 18:41, Neo Jia wrote:
> > Hi libvirt experts,
>
> Hi, welcome to the list.
>
> >
> > I am starting this email thread to discuss the potential solution /
> > proposal of
>
g.
For hot-unplug, after executing the QEMU monitor "device_del" command, libvirt
needs to write to the "destroy" sysfs attribute to complete the hot-unplug
process.
Since hot-plug is optional, the mdev_create or mdev_destroy operations may
return an error if it is not supported.
Thanks,
Neo
work of mimicking the
> vfio-mpci code in my vfio-mccw driver. I like these incremental patches.
Thanks for sharing your progress, and good to know our current v6 solution works
for you. We are still evaluating the vfio_mdev changes here, as I still prefer to
share the general VFIO pci handling inside a
On Tue, Aug 16, 2016 at 02:51:03PM -0600, Alex Williamson wrote:
> On Tue, 16 Aug 2016 13:30:06 -0700
> Neo Jia wrote:
>
> > On Mon, Aug 15, 2016 at 04:47:41PM -0600, Alex Williamson wrote:
> > > On Mon, 15 Aug 2016 12:59:08 -0700
> > > Neo Jia wrote:
> &g
On Mon, Aug 15, 2016 at 04:47:41PM -0600, Alex Williamson wrote:
> On Mon, 15 Aug 2016 12:59:08 -0700
> Neo Jia wrote:
>
> > > > >
> > > > > I'm not sure a comma separated list makes sense here, for both
> > > > > simplic
On Tue, Aug 16, 2016 at 05:58:54AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, August 16, 2016 1:44 PM
> >
> > On Tue, Aug 16, 2016 at 04:52:30AM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...@nvidia.c
On Tue, Aug 16, 2016 at 04:52:30AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, August 16, 2016 12:17 PM
> >
> > On Tue, Aug 16, 2016 at 03:50:44AM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...@nvidia.c
On Tue, Aug 16, 2016 at 03:50:44AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, August 16, 2016 11:46 AM
> >
> > On Tue, Aug 16, 2016 at 12:30:25AM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...@nvidia.c
On Tue, Aug 16, 2016 at 12:30:25AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, August 16, 2016 3:59 AM
> > > >
> > > > For NVIDIA vGPU solution we need to know all devices assigned to a VM in
> > > > one s
On Mon, Aug 15, 2016 at 04:47:41PM -0600, Alex Williamson wrote:
> On Mon, 15 Aug 2016 12:59:08 -0700
> Neo Jia wrote:
>
> > On Mon, Aug 15, 2016 at 09:38:52AM +, Tian, Kevin wrote:
> > > > From: Kirti Wankhede [mailto:kwankh...@nvidia.com]
> > > >
On Mon, Aug 15, 2016 at 04:52:39PM -0600, Alex Williamson wrote:
> On Mon, 15 Aug 2016 15:09:30 -0700
> Neo Jia wrote:
>
> > On Mon, Aug 15, 2016 at 09:59:26AM -0600, Alex Williamson wrote:
> > > On Mon, 15 Aug 2016 09:38:52 +
> > > "Tian, Kevin&q
sn't planning to support hotplug initially, but
> this seems like we're precluding hotplug from the design. I don't
> understand what's driving this one-shot requirement.
Hi Alex,
The requirement here is based on how our internal vGPU device model is designed;
with this we are able to pre-allocate the resources required for multiple virtual
devices within the same domain.
And I don't think this syntax will stop us from supporting hotplug at all.
For example, you can always create a virtual mdev and then do
echo "mdev_UUID" > /sys/class/mdev/mdev_start
then use QEMU monitor to add the device for hotplug.
>
> > As I replied in another mail, I really hope start/stop become a per-mdev
> > attribute instead of global one, e.g.:
> >
> > echo "0/1" > /sys/class/mdev/12345678-1234-1234-1234-123456789abc/start
> >
> > In many scenarios the user space client may only want to talk to the mdev
> > instance directly, w/o need to contact its parent device. Still take
> > live migration for example, I don't think Qemu wants to know parent
> > device of assigned mdev instances.
>
> Yep, QEMU won't know the parent device, only libvirt level tools
> managing the creation and destruction of the mdev device would know
> that. Perhaps in addition to migration uses we could even use
> start/stop for basic power management, device D3 state in the guest
> could translate to a stop command to remove that vGPU from scheduling
> while still retaining most of the state and resource allocations.
Just to recap what I replied to Kevin in his previous email: the current
mdev_start and mdev_stop don't require any knowledge of the parent device.
Thanks,
Neo
> Thanks,
>
> Alex
scent
> mdev activity on source device before mdev hardware state is snapshot,
> and then resume mdev activity on dest device after its state is recovered.
> Intel has implemented experimental live migration support in KVMGT (soon
> to release), based on above two interfaces (plus another two to get/set
> mdev state).
>
> > >
> >
> > For NVIDIA vGPU solution we need to know all devices assigned to a VM in
> > one shot to commit resources of all vGPUs assigned to a VM along with
> > some common resources.
>
> Kirti, can you elaborate on the background of the above one-shot commit
> requirement? It's hard to understand such a requirement.
>
> As I replied in another mail, I really hope start/stop become a per-mdev
> attribute instead of global one, e.g.:
>
> echo "0/1" > /sys/class/mdev/12345678-1234-1234-1234-123456789abc/start
>
> In many scenarios the user space client may only want to talk to the mdev
> instance directly, w/o need to contact its parent device. Still take
> live migration for example, I don't think Qemu wants to know parent
> device of assigned mdev instances.
Hi Kevin,
Having a global /sys/class/mdev/mdev_start doesn't require anybody to know the
parent device. You can just do
echo "mdev_UUID" > /sys/class/mdev/mdev_start
or
echo "mdev_UUID" > /sys/class/mdev/mdev_stop
without knowing the parent device.
Thanks,
Neo
>
> Thanks
> Kevin
On Wed, Jun 08, 2016 at 02:13:49PM +0800, Dong Jia wrote:
> On Tue, 7 Jun 2016 20:48:42 -0700
> Neo Jia wrote:
>
> > On Wed, Jun 08, 2016 at 11:18:42AM +0800, Dong Jia wrote:
> > > On Tue, 7 Jun 2016 19:39:21 -0600
> > > Alex Williamson wrote:
> > >
hat.com]
> > > > Sent: Wednesday, June 08, 2016 6:42 AM
> > > >
> > > > On Tue, 7 Jun 2016 03:03:32 +
> > > > "Tian, Kevin" wrote:
> > > >
> > > > > > From: Alex Williamson [mailto:alex.william...@r
On Mon, Jun 06, 2016 at 04:29:11PM +0800, Dong Jia wrote:
> On Sun, 5 Jun 2016 23:27:42 -0700
> Neo Jia wrote:
>
> 2. VFIO_DEVICE_CCW_CMD_REQUEST
> This intends to handle an intercepted channel I/O instruction. It
> basically need to do the following thing:
May I ask how a
sors) and CCWs (channel
> command words) to handle I/O operations.
>
> > I'm curious to know. Are you planning to write a driver (vfio-mccw) for
> > mediated ccw device?
> I wrote two drivers:
> 1. A vfio-pccw driver for the physical ccw device, which will register
> th
long pg_cnt = 1;
> > +
> > + iova = vaddr[i] << PAGE_SHIFT;
> Dear Kirti:
>
> Got one question for the vaddr-iova conversion here.
> Is this a common rule that can be applied to all architectures?
> AFAIK, this is wrong for the
On Fri, May 13, 2016 at 05:23:44PM +0800, Jike Song wrote:
> On 05/13/2016 04:31 PM, Neo Jia wrote:
> > On Fri, May 13, 2016 at 07:45:14AM +, Tian, Kevin wrote:
> >>
> >> We use page tracking framework, which is newly added to KVM recently,
> >> to mark RAM
On Fri, May 13, 2016 at 05:46:17PM +0800, Jike Song wrote:
> On 05/13/2016 04:12 AM, Neo Jia wrote:
> > On Thu, May 12, 2016 at 01:05:52PM -0600, Alex Williamson wrote:
> >>
> >> If you're trying to equate the scale of what we need to track vs what
>
On Fri, May 13, 2016 at 04:39:37PM +0800, Dong Jia wrote:
> On Fri, 13 May 2016 00:24:34 -0700
> Neo Jia wrote:
>
> > On Fri, May 13, 2016 at 03:10:22PM +0800, Dong Jia wrote:
> > > On Thu, 12 May 2016 13:05:52 -0600
> > > Alex Williamson wrote:
> > >
On Fri, May 13, 2016 at 08:02:41AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Friday, May 13, 2016 3:38 PM
> >
> > On Fri, May 13, 2016 at 07:13:44AM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...@nvidia.com]
On Fri, May 13, 2016 at 07:45:14AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Friday, May 13, 2016 3:42 PM
> >
> > On Fri, May 13, 2016 at 03:30:27PM +0800, Jike Song wrote:
> > > On 05/13/2016 02:43 PM, Neo Jia wrote:
> >
On Fri, May 13, 2016 at 03:30:27PM +0800, Jike Song wrote:
> On 05/13/2016 02:43 PM, Neo Jia wrote:
> > On Fri, May 13, 2016 at 02:22:37PM +0800, Jike Song wrote:
> >> On 05/13/2016 10:41 AM, Tian, Kevin wrote:
> >>>> From: Neo Jia [mailto:c...@nvidia.com] Sent: Fr
On Fri, May 13, 2016 at 07:13:44AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Friday, May 13, 2016 2:42 PM
> >
> >
> > >
> > > We possibly have the same requirement from the mediate driver backend:
> > >
&g
edhat.com]
> > > > Sent: Thursday, May 12, 2016 6:06 AM
> > > >
> > > > On Wed, 11 May 2016 17:15:15 +0800
> > > > Jike Song wrote:
> > > >
> > > > > On 05/11/2016 12:02 AM, Neo Jia wrote:
> > > > > &
On Fri, May 13, 2016 at 02:22:37PM +0800, Jike Song wrote:
> On 05/13/2016 10:41 AM, Tian, Kevin wrote:
> >> From: Neo Jia [mailto:c...@nvidia.com]
> >> Sent: Friday, May 13, 2016 3:49 AM
> >>
> >>>
> >>>> Perhaps one possibility would be
On Fri, May 13, 2016 at 02:08:36PM +0800, Jike Song wrote:
> On 05/13/2016 03:49 AM, Neo Jia wrote:
> > On Thu, May 12, 2016 at 12:11:00PM +0800, Jike Song wrote:
> >> On Thu, May 12, 2016 at 6:06 AM, Alex Williamson
> >> wrote:
> >>> On Wed, 11 May 201
> > On Wed, 11 May 2016 17:15:15 +0800
> > > Jike Song wrote:
> > >
> > > > On 05/11/2016 12:02 AM, Neo Jia wrote:
> > > > > On Tue, May 10, 2016 at 03:52:27PM +0800, Jike Song wrote:
> > > > >> On 05/05/2016 05:27 PM,
On Thu, May 12, 2016 at 12:11:00PM +0800, Jike Song wrote:
> On Thu, May 12, 2016 at 6:06 AM, Alex Williamson
> wrote:
> > On Wed, 11 May 2016 17:15:15 +0800
> > Jike Song wrote:
> >
> >> On 05/11/2016 12:02 AM, Neo Jia wrote:
> >> > On Tue, May
u case, since
> > that fact is out of GPU driver control. A simple way is to use
> > dma_map_page which internally will cope with w/ and w/o iommu
> > case gracefully, i.e. return HPA w/o iommu and IOVA w/ iommu.
> > Then in this file we only need to cache GPA to whatever dm
e program once.
> > According to my understanding of your proposal, I should do:
> >
> > #1. Introduce a vfio_iommu_type1_ccw as the vfio iommu backend for ccw.
> > When starting the guest, pin all of guest mem
to me how
> > > > the vendor driver determines what this maps to, do they compare it to
> > > > the physical device's own BAR addresses?
> > >
> > > I didn't quite understand too. Based on earlier discussion, do we need
> > > something li
a real device, since
> > there may be no physical config space implemented for each vGPU.
> > So anyway the vendor vGPU driver needs to create/emulate the virtualized
> > config space, while the way it is created might be vendor specific.
> > So better to keep the interfac
e. VFIO Type1 IOMMU patch provide new set of APIs
> > for
> > guest page translation.
> >
> > What's left to do?
> > VFIO driver for vGPU device doesn't support devices with MSI-X enabled.
> >
> > Please review.
> >
>
> Thanks Kirti
On Fri, Mar 11, 2016 at 10:56:24AM -0700, Alex Williamson wrote:
> On Fri, 11 Mar 2016 08:55:44 -0800
> Neo Jia wrote:
>
> > > > Alex, what's your opinion on this?
> > >
> > > The sticky point is how vfio, which is only handling the vGPU, has a
>
On Fri, Mar 11, 2016 at 09:13:15AM -0700, Alex Williamson wrote:
> On Fri, 11 Mar 2016 04:46:23 +
> "Tian, Kevin" wrote:
>
> > > From: Neo Jia [mailto:c...@nvidia.com]
> > > Sent: Friday, March 11, 2016 12:20 PM
> > >
> > > On
On Fri, Mar 11, 2016 at 04:46:23AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Friday, March 11, 2016 12:20 PM
> >
> > On Thu, Mar 10, 2016 at 11:10:10AM +0800, Jike Song wrote:
> > >
> > > >> Is it supposed to b
with the real GPU should be managed by the GPU vendor
driver.
With the default TYPE1 IOMMU, it works with vfio-pci as it owns the device.
Thanks,
Neo
> --
> Thanks,
> Jike
>
On Mon, Mar 07, 2016 at 02:07:15PM +0800, Jike Song wrote:
> Hi Neo,
>
> On Fri, Mar 4, 2016 at 3:00 PM, Neo Jia wrote:
> > On Wed, Mar 02, 2016 at 04:38:34PM +0800, Jike Song wrote:
> >> On 02/24/2016 12:24 AM, Kirti Wankhede wrote:
> >>
On Wed, Mar 02, 2016 at 04:38:34PM +0800, Jike Song wrote:
> On 02/24/2016 12:24 AM, Kirti Wankhede wrote:
> > + vgpu_dma->size = map->size;
> > +
> > + vgpu_link_dma(vgpu_iommu, vgpu_dma);
>
> Hi Kirti & Neo,
>
> seems that no one actually setup
On Mon, Feb 29, 2016 at 05:39:02AM +, Tian, Kevin wrote:
> > From: Kirti Wankhede
> > Sent: Wednesday, February 24, 2016 12:24 AM
> >
> > Signed-off-by: Kirti Wankhede
> > Signed-off-by: Neo Jia
>
> Hi, Kirti/Neo,
>
> Thanks a lot for you upda
A trivial change to remove the string limit by using g_strdup_printf
Tested-by: Neo Jia
Signed-off-by: Neo Jia
Signed-off-by: Kirti Wankhede
---
hw/vfio/pci.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 30eb945a4fc1..d091d8cf0e6e
A trivial change to remove the string limit by using g_strdup_printf
and g_strconcat
Tested-by: Neo Jia
Signed-off-by: Neo Jia
Signed-off-by: Kirti Wankhede
---
hw/vfio/pci.c | 19 ---
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
object should have its own uuid.
>
> You can use uuids to name the vgpus if you want of course. But the vgpu
> uuid will will have no relationship whatsoever to the vm uuid then.
>
Agree. I should have made it clear that it should be a separate object.
Thanks,
Neo
> cheers,
> Gerd
>
On Wed, Feb 17, 2016 at 09:52:04AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Wednesday, February 17, 2016 5:35 PM
> >
> > On Wed, Feb 17, 2016 at 08:57:08AM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...
On Wed, Feb 17, 2016 at 08:57:08AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Wednesday, February 17, 2016 3:55 PM
>
> 'whoever' is too strict here. I don't think UUID is required in all scenarios.
>
> In your scen
On Wed, Feb 17, 2016 at 07:51:12AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Wednesday, February 17, 2016 3:32 PM
> >
> > On Wed, Feb 17, 2016 at 07:52:53AM +0100, Gerd Hoffmann wrote:
> > > Hi,
> > >
> > &g
On Wed, Feb 17, 2016 at 07:46:15AM +, Tian, Kevin wrote:
> > From: Neo Jia
> > Sent: Wednesday, February 17, 2016 3:26 PM
> >
> >
>
> >
> > If your most concern is having this kind of path doesn't provide enough
> > information of the virtu
fit is that having the UUID as part of the virtual vgpu device path will
allow whoever is going to configure QEMU to automatically discover the virtual
device sysfs for free.
Thanks,
Neo
>
> cheers,
> Gerd
>
On Wed, Feb 17, 2016 at 06:02:36AM +, Tian, Kevin wrote:
> > From: Neo Jia
> > Sent: Wednesday, February 17, 2016 1:38 PM
> > > > >
> > > > >
> > > >
> > > > Hi Kevin,
> > > >
> > > > The answer is simp
ut getting worse with
> each iteration due to excessive quoting.
>
Hi Eric,
Sorry about that, I will pay attention to this.
Thanks,
Neo
> --
> Eric Blake eblake redhat com+1-919-301-3266
> Libvirt virtualization library http://libvirt.org
>
>
> * Unknown Key
> * 0x2527436A
On Wed, Feb 17, 2016 at 05:04:31AM +, Tian, Kevin wrote:
> > From: Neo Jia
> > Sent: Wednesday, February 17, 2016 12:18 PM
> >
> > On Wed, Feb 17, 2016 at 03:31:24AM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...@nvidia.com]
> > &g
On Wed, Feb 17, 2016 at 03:31:24AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, February 16, 2016 4:49 PM
> >
> > On Tue, Feb 16, 2016 at 08:10:42AM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...@nvidia.
On Tue, Feb 16, 2016 at 08:10:42AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, February 16, 2016 3:53 PM
> >
> > On Tue, Feb 16, 2016 at 07:40:47AM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...@nvidia.
On Tue, Feb 16, 2016 at 07:40:47AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, February 16, 2016 3:37 PM
> >
> > On Tue, Feb 16, 2016 at 07:27:09AM +, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:c...@nvidia.
On Tue, Feb 16, 2016 at 07:27:09AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, February 16, 2016 3:13 PM
> >
> > On Tue, Feb 16, 2016 at 06:49:30AM +, Tian, Kevin wrote:
> > > > From: Alex Williamson [mailto:alex
space policy whether that UUID has any implicit meaning like
> > matching the VM UUID. Having an index within a UUID bothers me a bit,
> > but it doesn't seem like too much of a concession to enable the use case
> > that NVIDIA is trying to achieve. Thanks,
> >
>
> I would prefer making UUID an optional parameter, while not tying
> sysfs vgpu naming to UUID. This would be more flexible for different
> scenarios where UUID might not be required.
Hi Kevin,
Happy Chinese New Year!
I think having the UUID as the vgpu device name will allow us to have a GPU
vendor-agnostic solution for the upper layer software stack such as QEMU, which
is supposed to open the device.
Thanks,
Neo
>
> Thanks
> Kevin
to
> > Xen, that's even better. Thanks,
> >
>
> Here is the main open in my head, after thinking about the role of VFIO:
>
> For above 7 services required by vGPU device model, they can fall into
> two categories:
>
> a) services to connect vGPU with VM
iscuss several remaining opens atop
> (such as exit-less emulation, pin/unpin, etc.). Another thing we need
> to think is whether this new design is still compatible to Xen side.
>
> Thanks a lot all for the great discussion (especially Alex with many good
> inputs)! I believe it becomes much clearer now than 2 weeks ago, about
> how to integrate KVMGT with VFIO. :-)
>
It is great to see you guys are on board with the VFIO solution! As Kirti has
mentioned in other threads, let's review the current registration APIs and
figure out what we need to add for both solutions.
Thanks,
Neo
> Thanks
> Kevin
On Tue, Feb 02, 2016 at 08:18:44AM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, February 02, 2016 4:13 PM
> >
> > On Tue, Feb 02, 2016 at 09:00:43AM +0100, Gerd Hoffmann wrote:
> > > Hi,
> > >
> > &g
it is just that his comment got lost in our previous long email thread.
Thanks,
Neo
>
> cheers,
> Gerd
>
On Wed, Jan 27, 2016 at 09:10:16AM -0700, Alex Williamson wrote:
> On Wed, 2016-01-27 at 01:14 -0800, Neo Jia wrote:
> > On Tue, Jan 26, 2016 at 04:30:38PM -0700, Alex Williamson wrote:
> > > On Tue, 2016-01-26 at 14:28 -0800, Neo Jia wrote:
> > > > On Tue, Jan 26,
On Tue, Jan 26, 2016 at 04:30:38PM -0700, Alex Williamson wrote:
> On Tue, 2016-01-26 at 14:28 -0800, Neo Jia wrote:
> > On Tue, Jan 26, 2016 at 01:06:13PM -0700, Alex Williamson wrote:
> > > > 1.1 Under per-
On Tue, Jan 26, 2016 at 01:06:13PM -0700, Alex Williamson wrote:
> On Tue, 2016-01-26 at 02:20 -0800, Neo Jia wrote:
> > On Mon, Jan 25, 2016 at 09:45:14PM +, Tian, Kevin wrote:
> > > > From: Alex Williamson [mailto:alex.william...@redhat.com]
> >
> > Hi Alex
Song wrote:
> > > > On 01/26/2016 05:30 AM, Alex Williamson wrote:
> > > > > [cc +Neo @Nvidia]
> > > > >
> > > > > Hi Jike,
> > > > >
> > > > > On Mon, 2016-01-25 at 19:34 +0800, Jike Song wrote:
> >
On Tue, Jan 26, 2016 at 07:24:52PM +, Tian, Kevin wrote:
> > From: Neo Jia [mailto:c...@nvidia.com]
> > Sent: Tuesday, January 26, 2016 6:21 PM
> >
> > 0. High level overview
> > =
> >
On Mon, Jan 25, 2016 at 09:45:14PM +, Tian, Kevin wrote:
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Tuesday, January 26, 2016 5:30 AM
> >
> > [cc +Neo @Nvidia]
> >
> > Hi Jike,
> >
> > On Mon, 2016-01-25 at 19:34 +
On Mon, Jan 25, 2016 at 09:45:14PM +, Tian, Kevin wrote:
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Tuesday, January 26, 2016 5:30 AM
> >
> > [cc +Neo @Nvidia]
> >
> > Hi Jike,
> >
> > On Mon, 2016-01-25 at 19:34 +
Hi Serge, you are right. I have reported this issue to Red Hat. They said that
RHEL 6.5 has removed this option.
https://bugzilla.redhat.com/show_bug.cgi?id=1027074
Public bug reported:
In Ubuntu, my qemu-img version is 0.12.0 and its help message shows that the
"qemu-img convert" command supports the "-s" option, but it disappeared in
qemu-img 0.12.1. However, the option is still supported in RHEL 6.4, whose rpm
version is qemu-img-0.12.1.2-2.355.el6.
kernel.org/msg21145.html).
Thanks,
Neo
--
I would remember that if researchers were not ambitious
probably today we haven't the technology we are using!
Here is what I have asked before. The reason I want to assign a real serial
port to the guest is that debugging through the network becomes really slow.
Thanks,
Neo
On Thu, Mar 11, 2010 at 2:44 AM, Neo Jia wrote:
> hi,
>
> I have followed the windows guest debugging procedure fr
can't talk to it from my dev machine, which is connected via ttyS1 to the
target machine (host).
Is this a known problem?
Thanks,
Neo
hi,
When I am trying to use kqemu on my IA32 Linux box, it throws "Could
not initialize SDL -- exiting".
Could you help me figure it out?
Thanks,
Neo
/gdb/2004-03/msg1.html, it
seems that this signal is generated by qemu instead of being sent by
the underlying hardware.
So, I am wondering if anybody can point me to the code in
qemu that takes care of those signals.
Thanks,
Neo
On 4/25/07, Jan Kiszka <[EMAIL PROTECTED]> wrote:
Neo Jia wrote:
> On 4/25/07, Jan Kiszka <[EMAIL PROTECTED]> wrote:
>> Neo Jia wrote:
>> > On 4/25/07, Jan Kiszka <[EMAIL PROTECTED]> wrote:
>> >> Neo Jia wrote:
>> >> > On 4/25/0