s according
to the boot protocol.
From ca9763668eed2eaaf0c0c2640f1502c22b68a739 Mon Sep 17 00:00:00 2001
From: Jason Gunthorpe
Date: Fri, 14 Sep 2012 11:27:17 -0600
Subject: [PATCH] [ARM] Use AT() in the linker script to create correct program
headers
The standard linux asm-generic/vmlinux.l
> > index 8756e4b..551e971 100644
> > +++ b/arch/arm/include/asm/memory.h
> > @@ -350,7 +350,7 @@ static inline __deprecated void *bus_to_virt(unsigned
> > long x)
> > #define virt_addr_valid(kaddr) (((unsigned long)(kaddr) >= PAGE_OFFSET
> > && (unsigned long)(kaddr) < (unsigned long)high_m
On Tue, Apr 22, 2014 at 10:44:14AM +0100, Daniel Thompson wrote:
> On 17/04/14 21:35, Jason Gunthorpe wrote:
> >>> The above is useful for loading the raw uncompressed Image without
> >>> carrying the full ELF baggage.
> >>
> >> What exactly i
On Tue, Apr 22, 2014 at 06:11:42PM +0100, Russell King - ARM Linux wrote:
> Put another way, if your platform is part of the multi-platform kernel
> then you are *excluded* from being able to use this... unless you hack
> the Kconfig, and then also provide a constant value for PHYS_OFFSET,
> there
On Thu, Mar 21, 2013 at 07:15:25PM +0200, Michael S. Tsirkin wrote:
> No because application does this:
> init page
>
> ...
>
> after a lot of time
>
> ..
>
> register
> send
> unregister
>
> so it can not be read only.
mprotect(READONLY)
register
send
unregister
mprotect(WRITABLE)
?
With
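
(For illustration, a minimal sketch of the read-only sequence suggested above, assuming libibverbs, a page-aligned buffer and an already-created protection domain; queue-pair setup and the actual send are elided.)

#include <sys/mman.h>
#include <infiniband/verbs.h>

/* Freeze the buffer, register it, send from it, unregister, then make it
 * writable again -- the mprotect() brackets are the point being debated. */
static int send_snapshot(struct ibv_pd *pd, void *buf, size_t len)
{
    struct ibv_mr *mr;
    int ret = -1;

    if (mprotect(buf, len, PROT_READ))            /* freeze the pages */
        return -1;

    mr = ibv_reg_mr(pd, buf, len, 0);             /* local-read-only MR */
    if (mr) {
        /* ... post a send from mr and wait for its completion ... */
        ibv_dereg_mr(mr);                         /* unregister */
        ret = 0;
    }

    mprotect(buf, len, PROT_READ | PROT_WRITE);   /* writable again */
    return ret;
}
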
On Thu, Mar 21, 2013 at 11:39:47AM +0200, Michael S. Tsirkin wrote:
> On Thu, Mar 21, 2013 at 02:13:38AM -0700, Roland Dreier wrote:
> > On Thu, Mar 21, 2013 at 1:51 AM, Michael S. Tsirkin wrote:
> > >> In that case, no, I don't see any reason for LOCAL_WRITE, since the
> > >> only RDMA operations
On Thu, Mar 21, 2013 at 09:15:41PM +0200, Michael S. Tsirkin wrote:
> On Thu, Mar 21, 2013 at 12:41:35PM -0600, Jason Gunthorpe wrote:
> > On Thu, Mar 21, 2013 at 08:16:33PM +0200, Michael S. Tsirkin wrote:
> >
> > > This is the one I find redundant. Since the write w
On Thu, Mar 21, 2013 at 08:16:33PM +0200, Michael S. Tsirkin wrote:
> This is the one I find redundant. Since the write will be done by
> the adaptor under direct control by the application, why does it
> make sense to declare this beforehand? If you don't want to allow
> local write access to me
On Thu, Mar 21, 2013 at 07:42:37PM +0200, Michael S. Tsirkin wrote:
> It doesn't actually, and our app would sometimes write to these pages.
> It simply does not care which version does the remote get in this case
> since we track writes and resend later.
Heh, somehow I thought you might say that
On Thu, Nov 17, 2022 at 07:07:10PM +0200, Avihai Horon wrote:
> > > +}
> > > +
> > > +if (mig_state->data_fd != -1) {
> > > +if (migration->data_fd != -1) {
> > > +/*
> > > + * This can happen if the device is asynchronously reset and
> > > + * te
On Wed, Nov 23, 2022 at 09:42:36AM +0800, chenxiang via wrote:
> From: Xiang Chen
>
> Currently the number of MSI vectors comes from register PCI_MSI_FLAGS
> which should be power-of-2 in qemu, in some scenarios it is not the same as
> the number that driver requires in guest, for example, a PCI
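
(Sketch of the encoding behind the mismatch described above: MSI's Multiple Message Capable field stores log2 of the vector count, so only a power of two can be advertised even when the guest driver wants, say, 3 vectors. The field mask is the one from <linux/pci_regs.h>.)

#include <stdint.h>
#include <linux/pci_regs.h>

/* The Multiple Message Capable field (bits 3:1 of Message Control) holds
 * log2(count), so the advertised count is always 1, 2, 4, 8, 16 or 32. */
static unsigned int msi_vectors_advertised(uint16_t msi_flags)
{
    return 1u << ((msi_flags & PCI_MSI_FLAGS_QMASK) >> 1);
}
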
On Sat, Nov 26, 2022 at 11:15:14AM +, Marc Zyngier wrote:
> > Physical hardware doesn't do this, virtual emulation shouldn't either.
>
> If you want to fix VFIO, be my guest. My rambling about the sorry
> state of this has been in the kernel for 5 years (ed8703a506a8).
We are talking about t
On Mon, Nov 28, 2022 at 11:50:03AM -0700, Alex Williamson wrote:
> There's a claim here about added complexity that I'm not really seeing.
> It looks like we simply make an ioctl call here and scale our buffer
> based on the minimum of the returned device estimate or our upper
> bound.
I'm not ke
On Mon, Nov 28, 2022 at 01:36:30PM -0700, Alex Williamson wrote:
> On Mon, 28 Nov 2022 15:40:23 -0400
> Jason Gunthorpe wrote:
>
> > On Mon, Nov 28, 2022 at 11:50:03AM -0700, Alex Williamson wrote:
> >
> > > There's a claim here about added complexity that I'm
On Mon, Feb 12, 2024 at 01:56:37PM +, Joao Martins wrote:
> There's generally two modes of operation for IOMMUFD:
>
> * The simple user API which intends to perform relatively simple things
> with IOMMUs e.g. DPDK. It generally creates an IOAS and attach to VFIO
> and mainly performs IOAS_MAP
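
(A minimal sketch of that "simple" flow, assuming a kernel that ships <linux/iommufd.h>: open /dev/iommu, allocate an IOAS and IOAS_MAP one user buffer at a fixed IOVA; attaching the VFIO device to the IOAS is omitted.)

#include <stddef.h>
#include <stdint.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int map_one_buffer(void *va, size_t len, uint64_t iova)
{
    int iommufd = open("/dev/iommu", O_RDWR);
    struct iommu_ioas_alloc alloc = { .size = sizeof(alloc) };
    struct iommu_ioas_map map = {
        .size = sizeof(map),
        .flags = IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE |
                 IOMMU_IOAS_MAP_FIXED_IOVA,
        .user_va = (uintptr_t)va,
        .length = len,
        .iova = iova,
    };

    if (iommufd < 0 || ioctl(iommufd, IOMMU_IOAS_ALLOC, &alloc))
        return -1;                      /* open or IOAS allocation failed */

    map.ioas_id = alloc.out_ioas_id;    /* map into the new address space */
    return ioctl(iommufd, IOMMU_IOAS_MAP, &map);
}
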
On Mon, Feb 12, 2024 at 01:56:41PM +, Joao Martins wrote:
> Allow disabling hugepages to be dirty tracked at base page
> granularity in similar vein to vfio_type1_iommu.disable_hugepages
> but per IOAS.
No objection to this, but I just wanted to observe I didn't imagine
using this option for thi
On Fri, May 03, 2024 at 04:04:25PM +0200, Cédric Le Goater wrote:
> However, have you considered another/complementary approach which
> would be to create an host IOMMU (iommufd) backend object and a vIOMMU
> device object together for each vfio-pci device being plugged in the
> machine ?
>
> Some
On Mon, May 06, 2024 at 02:30:47AM +, Duan, Zhenzhong wrote:
> I'm not clear how useful multiple iommufd instances support are.
> One possible benefit is for security? It may bring a slightly fine-grained
> isolation in kernel.
No. I don't think there is any usecase, it is only harmful.
Jason
On Tue, May 07, 2024 at 02:24:30AM +, Duan, Zhenzhong wrote:
> >On Mon, May 06, 2024 at 02:30:47AM +, Duan, Zhenzhong wrote:
> >
> >> I'm not clear how useful multiple iommufd instances support are.
> >> One possible benefit is for security? It may bring a slightly fine-grained
> >> isolati
On Tue, Apr 30, 2019 at 08:13:54PM +0300, Yuval Shaia wrote:
> On Mon, Apr 22, 2019 at 01:45:27PM -0300, Jason Gunthorpe wrote:
> > On Fri, Apr 19, 2019 at 01:16:06PM +0200, Hannes Reinecke wrote:
> > > On 4/15/19 12:35 PM, Yuval Shaia wrote:
> > > > On Thu, Ap
On Tue, Jan 31, 2023 at 09:53:03PM +0100, Eric Auger wrote:
> From: Yi Liu
>
> Add the iommufd backend. The IOMMUFD container class is implemented
> based on the new /dev/iommu user API. This backend obviously depends
> on CONFIG_IOMMUFD.
>
> So far, the iommufd backend doesn't support live migr
On Tue, Jan 31, 2023 at 03:43:01PM -0700, Alex Williamson wrote:
> How does this affect our path towards supported migration? I'm
> thinking about a user experience where QEMU supports migration if
> device A OR device B are attached, but not devices A and B attached to
> the same VM. We might h
On Tue, Jan 31, 2023 at 09:15:03PM -0700, Alex Williamson wrote:
> > IMHO this is generally the way forward to do multi-device as well,
> > remove the MMIO from all the address maps: VFIO, SW access, all of
> > them. Nothing can touch MMIO except for the vCPU.
>
> Are you suggesting this relative
On Wed, Feb 01, 2023 at 11:42:46AM -0700, Alex Williamson wrote:
> > 'p2p off' is a valuable option in its own right because this stuff
> > doesn't work reliably and is actively dangerous. Did you know you can
> > hard crash the bare metal from a guest on some platforms with P2P
> > operations? Yi
On Fri, Nov 19, 2021 at 09:47:27PM +0800, Chao Peng wrote:
> From: "Kirill A. Shutemov"
>
> The new seal type provides semantics required for KVM guest private
> memory support. A file descriptor with the seal set is going to be used
> as source of guest memory in confidential computing environme
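
(For context, a small sketch of the existing memfd sealing mechanism that the proposed seal builds on; the new guest-private-memory seal itself exists only in the patch series, not in mainline headers, so it is not used here.)

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* Create a sealable memfd, size it, then forbid shrinking and any further
 * seal changes.  A confidential-VM user would additionally apply the new
 * seal proposed in the patch before handing the fd to KVM. */
static int make_sealed_memfd(size_t size)
{
    int fd = memfd_create("guest-mem", MFD_CLOEXEC | MFD_ALLOW_SEALING);

    if (fd < 0 || ftruncate(fd, size) < 0)
        return -1;

    if (fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_SEAL) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
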
On Fri, Nov 19, 2021 at 04:39:15PM +0100, David Hildenbrand wrote:
> > If qmeu can put all the guest memory in a memfd and not map it, then
> > I'd also like to see that the IOMMU can use this interface too so we
> > can have VFIO working in this configuration.
>
> In QEMU we usually want to (and
On Fri, Nov 19, 2021 at 07:18:00PM +, Sean Christopherson wrote:
> On Fri, Nov 19, 2021, David Hildenbrand wrote:
> > On 19.11.21 16:19, Jason Gunthorpe wrote:
> > > As designed the above looks useful to import a memfd to a VFIO
> > > container but could you consid
On Fri, Nov 19, 2021 at 10:21:39PM +, Sean Christopherson wrote:
> On Fri, Nov 19, 2021, Jason Gunthorpe wrote:
> > On Fri, Nov 19, 2021 at 07:18:00PM +, Sean Christopherson wrote:
> > > No ideas for the kernel API, but that's also less concerning since
> >
On Sat, Nov 20, 2021 at 01:23:16AM +, Sean Christopherson wrote:
> On Fri, Nov 19, 2021, Jason Gunthorpe wrote:
> > On Fri, Nov 19, 2021 at 10:21:39PM +, Sean Christopherson wrote:
> > > On Fri, Nov 19, 2021, Jason Gunthorpe wrote:
> > > > On Fri, Nov 19,
On Mon, Nov 22, 2021 at 10:26:12AM +0100, David Hildenbrand wrote:
> I do wonder if we want to support sharing such memfds between processes
> in all cases ... we most certainly don't want to be able to share
> encrypted memory between VMs (I heard that the kernel has to forbid
> that). It would m
On Mon, Nov 22, 2021 at 02:35:49PM +0100, David Hildenbrand wrote:
> On 22.11.21 14:31, Jason Gunthorpe wrote:
> > On Mon, Nov 22, 2021 at 10:26:12AM +0100, David Hildenbrand wrote:
> >
> >> I do wonder if we want to support sharing such memfds between processes
>
On Mon, Nov 22, 2021 at 03:57:17PM +0100, David Hildenbrand wrote:
> On 22.11.21 15:01, Jason Gunthorpe wrote:
> > On Mon, Nov 22, 2021 at 02:35:49PM +0100, David Hildenbrand wrote:
> >> On 22.11.21 14:31, Jason Gunthorpe wrote:
> >>> On Mon, Nov 22, 2021 at 10:26
On Tue, Nov 23, 2021 at 10:06:02AM +0100, Paolo Bonzini wrote:
> I think it's great that memfd hooks are usable by more than one subsystem,
> OTOH it's fair that whoever needs it does the work---and VFIO does not need
> it for confidential VMs, yet, so it should be fine for now to have a single
>
On Tue, May 16, 2023 at 10:03:54AM +, Shameerali Kolothum Thodi wrote:
> > Currently VFIO migration doesn't implement some kind of intermediate
> > quiescent state in which P2P DMAs are quiesced before stopping or
> > running the device. This can cause problems in multi-device migration
> > wh
On Tue, May 16, 2023 at 01:57:22PM +, Shameerali Kolothum Thodi wrote:
> > What happens on your platform if a guest tries to do P2P? Does the
> > platform crash?
>
> I am not sure. Since the devices are behind SMMU, I was under the assumption
> that we do have the guarantee of isolation here(
On Tue, May 16, 2023 at 02:35:21PM +, Shameerali Kolothum Thodi wrote:
> Ok. Got it. So it depends on what SMMU does for that mapping and is not
> related to migration per se and has the potential to crash the system if
> SMMU go ahead with that memory access. Isn't it a more generic problem
On Fri, Jun 17, 2022 at 03:51:29PM -0600, Alex Williamson wrote:
> It's ok by me if QEMU vfio is the one that marks all mapped pages dirty
> if the host interface provides no way to do so. Would we toggle that
> based on whether the device has bus-master enabled?
I don't think so, that is a very
On Thu, May 18, 2023 at 10:16:24AM -0400, Peter Xu wrote:
> What you mentioned above makes sense to me from the POV that 1 vIOMMU may
> not suffice, but that's at least totally new area to me because I never
> used >1 IOMMUs even bare metal (excluding the case where I'm aware that
> e.g. a GPU cou
On Thu, May 18, 2023 at 03:45:24PM -0400, Peter Xu wrote:
> On Thu, May 18, 2023 at 11:56:46AM -0300, Jason Gunthorpe wrote:
> > On Thu, May 18, 2023 at 10:16:24AM -0400, Peter Xu wrote:
> >
> > > What you mentioned above makes sense to me from the POV that 1 vIOMMU ma
On Fri, May 26, 2023 at 08:44:29AM +, Liu, Yi L wrote:
> > > >> In fact, the other purpose of this patch is to eliminate noisy error
> > > >> log when we work with IOMMUFD. It looks the duplicate UNMAP call will
> > > >> fail with IOMMUFD while always succeed with legacy container. This
> > >
On Tue, May 10, 2022 at 08:35:00PM +0800, Zhangfei Gao wrote:
> Thanks Yi and Eric,
> Then will wait for the updated iommufd kernel for the PCI MMIO region.
>
> Another question,
> How to get the iommu_domain in the ioctl.
The ID of the iommu_domain (called the hwpt) it should be returned by
the
On Thu, May 12, 2022 at 11:57:10AM -0600, Alex Williamson wrote:
> > @@ -767,9 +767,10 @@ static void vfio_migration_state_notifier(Notifier
> > *notifier, void *data)
> > case MIGRATION_STATUS_CANCELLED:
> > case MIGRATION_STATUS_FAILED:
> > bytes_transferred = 0;
> > -
On Thu, May 12, 2022 at 03:11:40PM -0600, Alex Williamson wrote:
> On Thu, 12 May 2022 15:25:32 -0300
> Jason Gunthorpe wrote:
>
> > On Thu, May 12, 2022 at 11:57:10AM -0600, Alex Williamson wrote:
> > > > @@ -767,9 +767,10 @@ static void vfio_migration_state_notifier(
On Mon, May 16, 2022 at 02:22:00PM -0600, Alex Williamson wrote:
> On Mon, 16 May 2022 13:22:14 +0200
> Juan Quintela wrote:
>
> > Avihai Horon wrote:
> > > Currently, if IOMMU of a VFIO container doesn't support dirty page
> > > tracking, migration is blocked completely. This is because a DMA-a
On Tue, May 17, 2022 at 10:00:45AM -0600, Alex Williamson wrote:
> > This is really intended to be a NOP from where things are now, as if
> > you use mlx5 live migration without a patch like this then it causes a
> > botched pre-copy since everything just ends up permanently dirty.
> >
> > If it
On Tue, May 17, 2022 at 11:22:32AM -0600, Alex Williamson wrote:
> > > It seems like a better solution would be to expose to management
> > > tools that the VM contains a device that does not support the
> > > pre-copy phase so that downtime expectations can be adjusted.
> >
> > I don't expect
On Wed, May 18, 2022 at 01:54:34PM +0200, Juan Quintela wrote:
> >> Is there a really performance difference to just use:
> >>
> >> uint8_t buffer[size];
> >>
> >> qemu_get_buffer(f, buffer, size);
> >> write(fd, buffer, size);
> >>
> >> Or telling it otherwise, what sizes are we talking here?
> >
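
(A sketch of the bounded-buffer variant being debated: copy the device state through a fixed-size chunk instead of a size-dependent stack buffer. Assumes it lives inside QEMU with "qemu/osdep.h" and "migration/qemu-file.h" available; the chunk size is arbitrary.)

#define CHUNK_SIZE (64 * 1024)

static int copy_stream_to_fd(QEMUFile *f, int fd, uint64_t size)
{
    uint8_t buf[CHUNK_SIZE];

    while (size) {
        uint64_t n = MIN(size, (uint64_t)CHUNK_SIZE);

        if (qemu_get_buffer(f, buf, n) != n) {
            return -EIO;                    /* short read from the stream */
        }
        if (write(fd, buf, n) != (ssize_t)n) {
            return -errno;                  /* short write to the device fd */
        }
        size -= n;
    }
    return 0;
}
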
On Wed, May 18, 2022 at 01:39:31PM +0200, Juan Quintela wrote:
> > That does seem like a defect in this patch, any SLA constraints should
> > still all be checked under the assumption all ram is dirty.
>
> And how are we going to:
> - detect the network link speed
> - to be sure that we are insid
On Wed, May 18, 2022 at 05:00:26PM +0100, Daniel P. Berrangé wrote:
> On Wed, May 18, 2022 at 12:42:37PM -0300, Jason Gunthorpe wrote:
> > On Wed, May 18, 2022 at 01:54:34PM +0200, Juan Quintela wrote:
> >
> > > >> Is there a really performance difference to just
On Tue, May 17, 2022 at 09:46:56PM -0600, Alex Williamson wrote:
> The current solution is obviously non-optimal, it was mainly
> meant for backwards compatibility, but this seems like a fail faster
> solution, with less useless work, but also providing less indication
> how to configure the migra
On Wed, Feb 15, 2023 at 01:14:35PM -0700, Alex Williamson wrote:
> We'll need to consider whether we want to keep "dumb" dirty tracking,
> or even any form of dirty tracking in the type1 uAPI, under an
> experimental opt-in. Thanks,
I was expecting we'd delete the kernel code for type 1 dirty tr
On Tue, Jan 31, 2023 at 09:53:05PM +0100, Eric Auger wrote:
> Now we support two types of iommu backends, let's add the capability
> to select one of them. This depends on whether an iommufd object has
> been linked with the vfio-pci device:
>
> if the user wants to use the legacy backend, it shal
On Tue, Jan 31, 2023 at 09:52:47PM +0100, Eric Auger wrote:
> Given some iommufd kernel limitations, the iommufd backend is
> not yet fully on par with the legacy backend w.r.t. features like:
> - p2p mappings (you will see related error traces)
> - coherency tracking
You said this was a qemu sid
On Fri, Feb 03, 2023 at 06:57:02PM +0100, Eric Auger wrote:
> Hi Jason,
>
> On 2/3/23 13:51, Jason Gunthorpe wrote:
> > On Tue, Jan 31, 2023 at 09:53:05PM +0100, Eric Auger wrote:
> >> Now we support two types of iommu backends, let's add the capability
> >>
On Wed, Feb 22, 2023 at 03:40:43PM -0700, Alex Williamson wrote:
> > +/*
> > + * DMA logging uAPI guarantees to support at least num_ranges that
> > fits into
> > + * a single host kernel page. To be on the safe side, use this as a
> > limit
> > + * from which to merge to a single
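
(The arithmetic behind that cap, assuming the uAPI struct from <linux/vfio.h>: with 4 KiB pages and a 16-byte range entry this allows 256 ranges before QEMU has to start merging them.)

#include <stddef.h>
#include <linux/vfio.h>

/* How many DMA-logging ranges fit in a single host kernel page. */
static unsigned int max_dirty_ranges(size_t host_page_size)
{
    return host_page_size /
           sizeof(struct vfio_device_feature_dma_logging_range);
}
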
On Wed, Feb 22, 2023 at 04:34:39PM -0700, Alex Williamson wrote:
> > +/*
> > + * With vIOMMU we try to track the entire IOVA space. As the IOVA
> > space can
> > + * be rather big, devices might not be able to track it due to HW
> > + * limitations. In that case:
> > + * (1) Re
On Thu, Feb 23, 2023 at 12:27:23PM -0700, Alex Williamson wrote:
> So again, I think I'm just looking for a better comment that doesn't
> add FUD to the reasoning behind switching to a single range,
It isn't a single range, it is a single page of ranges, right?
The comment should say
"Keep the
On Thu, Feb 23, 2023 at 01:16:40PM -0700, Alex Williamson wrote:
> On Thu, 23 Feb 2023 15:30:28 -0400
> Jason Gunthorpe wrote:
>
> > On Thu, Feb 23, 2023 at 12:27:23PM -0700, Alex Williamson wrote:
> > > So again, I think I'm just looking for a better comment that
On Thu, Feb 23, 2023 at 01:06:33PM -0700, Alex Williamson wrote:
> > #2 is the presumption that the guest is using an identity map.
>
> This is a dangerous assumption.
>
> > > I'd think the only viable fallback if the vIOMMU doesn't report its max
> > > IOVA is the full 64-bit address space, othe
On Thu, Feb 23, 2023 at 03:33:09PM -0700, Alex Williamson wrote:
> On Thu, 23 Feb 2023 16:55:54 -0400
> Jason Gunthorpe wrote:
>
> > On Thu, Feb 23, 2023 at 01:06:33PM -0700, Alex Williamson wrote:
> > > > #2 is the presumption that the guest is using an identity ma
On Fri, Feb 24, 2023 at 12:53:26PM +, Joao Martins wrote:
> > But reading the code this ::bypass_iommu (new to me) apparently tells that
> > vIOMMU is bypassed or not for the PCI devices all the way to avoiding
> > enumerating in the IVRS/DMAR ACPI tables. And I see VFIO double-checks
> > whet
On Mon, Feb 27, 2023 at 09:14:44AM -0700, Alex Williamson wrote:
> But we have no requirement to send all init_bytes before stop-copy.
> This is a hack to achieve a theoretical benefit that a driver might be
> able to improve the latency on the target by completing another
> iteration.
I think th
On Wed, Mar 01, 2023 at 12:55:59PM -0700, Alex Williamson wrote:
> So it seems like what we need here is both a preface buffer size and a
> target device latency. The QEMU pre-copy algorithm should factor both
> the remaining data size and the device latency into deciding when to
> transition to
On Fri, Jan 06, 2023 at 04:36:09PM -0700, Alex Williamson wrote:
> Missing from the series is the all important question of what happens
> to "x-enable-migration" now. We have two in-kernel drivers supporting
> v2 migration, so while hardware and firmware may still be difficult to
> bring togethe
On Mon, Jan 09, 2023 at 06:27:21PM +0100, Cédric Le Goater wrote:
> also, in vfio_migration_query_flags() :
>
> +static int vfio_migration_query_flags(VFIODevice *vbasedev, uint64_t
> *mig_flags)
> +{
> +uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature) +
> +
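
(A plausible completion of the helper quoted above, following the VFIO_DEVICE_FEATURE uAPI: size the buffer in uint64_t units so the trailing payload stays aligned, then issue a FEATURE_GET for the migration flags. A sketch of the shape, not necessarily the exact patch code.)

static int vfio_migration_query_flags(VFIODevice *vbasedev, uint64_t *mig_flags)
{
    uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature) +
                              sizeof(struct vfio_device_feature_migration),
                              sizeof(uint64_t))] = {};
    struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
    struct vfio_device_feature_migration *mig =
        (struct vfio_device_feature_migration *)feature->data;

    feature->argsz = sizeof(buf);
    feature->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION;

    if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) {
        return -errno;
    }

    *mig_flags = mig->flags;
    return 0;
}
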
On Mon, Aug 12, 2024 at 11:00:40AM -0600, Alex Williamson wrote:
> These devices have an embedded interrupt controller which is programmed
> with guest physical MSI address/data, which doesn't work. We need
> vfio-pci kernel support to provide a device feature which disables
> virtualization of th
On Tue, Aug 13, 2024 at 03:03:20PM -0600, Alex Williamson wrote:
> How does the guest know to write a remappable vector format? How does
> the guest know the host interrupt architecture? For example why would
> an aarch64 guest program an MSI vector of 0xfee... if the host is x86?
All excellent
On Thu, Aug 15, 2024 at 10:59:05AM -0600, Alex Williamson wrote:
> > This is probably the only way to approach this, trap and emulate the
> > places in the device that program additional interrupt sources and do
> > a full MSI-like flow to set them up in the kernel.
>
> Your last sentence here se
On Mon, May 30, 2022 at 08:07:35PM +0300, Avihai Horon wrote:
> +/* Returns 1 if end-of-stream is reached, 0 if more data and -1 if error */
> +static int vfio_save_block(QEMUFile *f, VFIOMigration *migration)
> +{
> +ssize_t data_size;
> +
> +data_size = read(migration->data_fd, migration
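
(A hedged completion of the quoted read loop: 0 from read() on data_fd means end-of-stream, a negative value is an error, and any other chunk is forwarded into the migration stream. Field names and the DEV_DATA_STATE marker follow the shape of the series, not necessarily its final form.)

static int vfio_save_block(QEMUFile *f, VFIOMigration *migration)
{
    ssize_t data_size;

    data_size = read(migration->data_fd, migration->data_buffer,
                     migration->data_buffer_size);
    if (data_size < 0) {
        return -1;                        /* read error */
    }
    if (data_size == 0) {
        return 1;                         /* end-of-stream reached */
    }

    qemu_put_be64(f, VFIO_MIG_FLAG_DEV_DATA_STATE);
    qemu_put_be64(f, data_size);
    qemu_put_buffer(f, migration->data_buffer, data_size);

    return qemu_file_get_error(f);
}
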
On Fri, Apr 08, 2022 at 08:54:02PM +0200, David Hildenbrand wrote:
> RLIMIT_MEMLOCK was the obvious candidate, but as we discovered in the
> past already with secretmem, it's not 100% that good of a fit (unmovable
> is worse than mlocked). But it gets the job done for now at least.
No, it doesn't
On Wed, Apr 13, 2022 at 06:24:56PM +0200, David Hildenbrand wrote:
> On 12.04.22 16:36, Jason Gunthorpe wrote:
> > On Fri, Apr 08, 2022 at 08:54:02PM +0200, David Hildenbrand wrote:
> >
> >> RLIMIT_MEMLOCK was the obvious candidate, but as we discovered in the
> >
On Thu, Oct 13, 2022 at 01:25:10PM +0100, Joao Martins wrote:
> It would allow supporting both the (current UAPI) case where you need to
> transfer the state to get device state size (so checking against
> threshold_size
> pending_pre constantly would allow to not violate the SLA) as well as any
On Fri, Oct 14, 2022 at 01:29:51PM +0100, Joao Martins wrote:
> On 14/10/2022 12:28, Juan Quintela wrote:
> > Joao Martins wrote:
> >> On 13/10/2022 17:08, Juan Quintela wrote:
> >>> Oops. My understanding was that once the guest is stopped you can say
> >>> how big is it.
> >
> > Hi
> >
> >>
On Thu, Apr 14, 2022 at 03:47:07AM -0700, Yi Liu wrote:
> +static int vfio_get_devicefd(const char *sysfs_path, Error **errp)
> +{
> +long int vfio_id = -1, ret = -ENOTTY;
> +char *path, *tmp = NULL;
> +DIR *dir;
> +struct dirent *dent;
> +struct stat st;
> +gchar *contents
On Mon, Apr 25, 2022 at 11:10:14AM +0100, Daniel P. Berrangé wrote:
> > However, with iommufd there's no reason that QEMU ever needs more than
> > a single instance of /dev/iommufd and we're using per device vfio file
> > descriptors, so it seems like a good time to revisit this.
>
> I assume acc
On Tue, Apr 26, 2022 at 08:37:41AM +, Tian, Kevin wrote:
> Based on current plan there is probably a transition window between the
> point where the first vfio device type (vfio-pci) gaining iommufd support
> and the point where all vfio types supporting iommufd.
I am still hoping to do all
On Tue, Apr 26, 2022 at 10:41:01AM +, Tian, Kevin wrote:
> That's one case of incompatibility, but the IOMMU attach group callback
> can fail in a variety of ways. One that we've seen that is not
> uncommon is that we might have an mdev container with various mappings
> to other devices.
On Tue, Apr 26, 2022 at 05:55:29PM +0800, Yi Liu wrote:
> > I also suggest falling back to using "/dev/char/%u:%u" if the above
> > does not exist which prevents "vfio/devices/vfio" from turning into
> > ABI.
>
> do you mean there is no matched file under /dev/vfio/devices/? Is this
> possible?
T
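
(Sketch of the suggested fallback: open the VFIO device cdev through /dev/vfio/devices/<name>, and if that node is absent fall back to the kernel-provided /dev/char/<major>:<minor> alias, so the directory layout never becomes ABI. maj/min are assumed to have been parsed from the device's sysfs "dev" attribute.)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

static int open_vfio_cdev(const char *name, unsigned int maj, unsigned int min)
{
    char path[64];
    int fd;

    snprintf(path, sizeof(path), "/dev/vfio/devices/%s", name);
    fd = open(path, O_RDWR);
    if (fd >= 0) {
        return fd;
    }

    /* stable alias maintained by the kernel for every char device */
    snprintf(path, sizeof(path), "/dev/char/%u:%u", maj, min);
    return open(path, O_RDWR);
}
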
On Tue, Apr 26, 2022 at 10:08:30PM +0800, Yi Liu wrote:
> > I think it is strange that the allowed DMA a guest can do depends on
> > the order how devices are plugged into the guest, and varys from
> > device to device?
> >
> > IMHO it would be nicer if qemu would be able to read the new reserved
On Tue, Apr 26, 2022 at 10:21:59AM -0600, Alex Williamson wrote:
> We also need to be able to advise libvirt as to how each iommufd object
> or user of that object factors into the VM locked memory requirement.
> When used by vfio-pci, we're only mapping VM RAM, so we'd ask libvirt
> to set the loc
On Tue, Apr 26, 2022 at 01:24:35PM -0600, Alex Williamson wrote:
> On Tue, 26 Apr 2022 13:42:17 -0300
> Jason Gunthorpe wrote:
>
> > On Tue, Apr 26, 2022 at 10:21:59AM -0600, Alex Williamson wrote:
> > > We also need to be able to advise libvirt as to how each iommufd
On Tue, Apr 26, 2022 at 12:45:41PM -0600, Alex Williamson wrote:
> On Tue, 26 Apr 2022 11:11:56 -0300
> Jason Gunthorpe wrote:
>
> > On Tue, Apr 26, 2022 at 10:08:30PM +0800, Yi Liu wrote:
> >
> > > > I think it is strange that the allowed DMA a guest can do
On Tue, Apr 26, 2022 at 02:59:31PM -0600, Alex Williamson wrote:
> > The best you could do is make a dummy IOAS then attach the device,
> > read the mappings, detatch, and then do your unmaps.
>
> Right, the same thing the kernel does currently.
>
> > I'm imagining something like IOMMUFD_DEVICE_
On Mon, Sep 25, 2023 at 03:53:51PM +0100, Jonathan Cameron wrote:
> On Mon, 25 Sep 2023 11:03:28 -0300
> Jason Gunthorpe wrote:
>
> > On Mon, Sep 25, 2023 at 02:54:40PM +0100, Jonathan Cameron wrote:
> >
> > > Possible the ASWG folk would say this is fine and I'm
On Wed, Sep 27, 2023 at 12:33:18PM +0100, Jonathan Cameron wrote:
> CXL accelerators / GPUs etc are a different question but who has one
> of those anyway? :)
That's exactly what I mean when I say CXL will need it too. I keep
describing this current Grace & Hopper as pre-CXL HW. You can easily
On Wed, Sep 27, 2023 at 03:03:09PM +, Vikram Sethi wrote:
> > From: Alex Williamson
> > Sent: Wednesday, September 27, 2023 9:25 AM
> > To: Jason Gunthorpe
> > Cc: Jonathan Cameron ; Ankit Agrawal
> > ; David Hildenbrand ; Cédric Le
> > Goater ; shan
On Fri, Sep 15, 2023 at 02:42:48PM +0200, Cédric Le Goater wrote:
> On 8/30/23 12:37, Zhenzhong Duan wrote:
> > Hi All,
> >
> > As the kernel side iommufd cdev and hot reset feature have been queued,
> > also hwpt alloc has been added in Jason's for_next branch [1], I'd like
> > to update a new ve
On Mon, Sep 18, 2023 at 02:23:48PM +0200, Cédric Le Goater wrote:
> On 9/18/23 13:51, Jason Gunthorpe wrote:
> > On Fri, Sep 15, 2023 at 02:42:48PM +0200, Cédric Le Goater wrote:
> > > On 8/30/23 12:37, Zhenzhong Duan wrote:
> > > > Hi All,
> > > >
>
On Wed, Sep 20, 2023 at 02:01:39PM +0100, Daniel P. Berrangé wrote:
> Assuming we must have the exact same FD used for all vfio-pci devices,
> then using -object iommufd is the least worst way to get that FD
> injected into QEMU from libvirt.
Yes, same FD. It is a shared resource.
Jason
On Wed, Sep 20, 2023 at 02:19:42PM +0200, Cédric Le Goater wrote:
> On 9/20/23 05:42, Duan, Zhenzhong wrote:
> >
> >
> > > -Original Message-
> > > From: Cédric Le Goater
> > > Sent: Wednesday, September 20, 2023 1:08 AM
> > > Subject: Re: [PATCH v1 15/22] Add iommufd configure option
>
On Wed, Sep 20, 2023 at 01:39:02PM +0100, Daniel P. Berrangé wrote:
> > diff --git a/util/chardev_open.c b/util/chardev_open.c
> > new file mode 100644
> > index 00..d03e415131
> > --- /dev/null
> > +++ b/util/chardev_open.c
> > @@ -0,0 +1,61 @@
> > +/*
> > + * Copyright (C) 2023 Intel Cor
On Wed, Sep 20, 2023 at 07:37:53PM +0200, Eric Auger wrote:
> >> qemu will typically not be able to
> >> self-open /dev/iommufd as it is root-only.
> >
> > I don't understand, we open multiple fds to KVM devices. This is the
> > same.
> Actually qemu opens the /dev/iommu in case no fd is passed al
On Wed, Sep 20, 2023 at 12:01:42PM -0600, Alex Williamson wrote:
> On Wed, 20 Sep 2023 03:42:20 +
> "Duan, Zhenzhong" wrote:
>
> > >-Original Message-
> > >From: Cédric Le Goater
> > >Sent: Wednesday, September 20, 2023 1:08 AM
> > >Subject: Re: [PATCH v1 15/22] Add iommufd configure
On Wed, Sep 20, 2023 at 12:17:24PM -0600, Alex Williamson wrote:
> > The iommufd design requires one open of the /dev/iommu to be shared
> > across all the vfios.
>
> "requires"? It's certainly of limited value to have multiple iommufd
> instances rather than create multiple address spaces withi
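
(A minimal sketch of the "one shared /dev/iommu" model: the iommufd is opened once and every VFIO device cdev is bound to that same fd via VFIO_DEVICE_BIND_IOMMUFD. Assumes kernel headers new enough to provide the cdev uAPI.)

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int bind_device(int iommufd, const char *cdev_path)
{
    struct vfio_device_bind_iommufd bind = {
        .argsz = sizeof(bind),
        .iommufd = iommufd,
    };
    int devfd = open(cdev_path, O_RDWR);

    if (devfd < 0 || ioctl(devfd, VFIO_DEVICE_BIND_IOMMUFD, &bind)) {
        return -1;
    }
    return devfd;   /* bind.out_devid now identifies the device to iommufd */
}

/* Usage: both devices share one iommufd instance.
 *   int iommufd = open("/dev/iommu", O_RDWR);
 *   int a = bind_device(iommufd, "/dev/vfio/devices/vfio0");
 *   int b = bind_device(iommufd, "/dev/vfio/devices/vfio1");
 */
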
On Mon, Sep 25, 2023 at 02:54:40PM +0100, Jonathan Cameron wrote:
> Possible the ASWG folk would say this is fine and I'm reading too much into
> the spec, but I'd definitely suggest asking them via the appropriate path,
> or throwing in a code first proposal for a comment on this special case and
On Tue, Aug 08, 2023 at 09:23:09AM +0300, Avihai Horon wrote:
>
> On 07/08/2023 18:53, Cédric Le Goater wrote:
> > External email: Use caution opening links or attachments
> >
> >
> > [ Adding Juan and Peter for their awareness ]
> >
> > On 8/2/23 10:14, Avihai Horon wrote:
> > > Changing the d
On Sun, Jul 16, 2023 at 11:15:35AM +0300, Avihai Horon wrote:
> Hi all,
>
> The first patch in this series adds a small optimization to VFIO
> migration by moving the STOP_COPY->STOP transition to
> vfio_save_cleanup(). Testing with a ConnectX-7 VFIO device showed that
> this can reduce downtime b
On Mon, Jun 26, 2023 at 05:26:42PM +0200, Cédric Le Goater wrote:
> Since dirty tracking is a must-have to implement migration support
> for any existing and future VFIO PCI variant driver, anything else
> would be experimental code and we are trying to remove the flag !
> Please correct me if I a
On Tue, Jun 27, 2023 at 02:21:55PM +0200, Cédric Le Goater wrote:
> We have a way to run and migrate a machine with a device not supporting
> dirty tracking. Only Hisilicon is in that case today. Maybe there are
> plans to add dirty tracking support ?
Hisilicon will eventually use Joao's work fo
On Thu, Jun 08, 2023 at 10:05:08AM -0400, Peter Xu wrote:
> IIUC what VFIO does here is it returns succeed if unmap over nothing rather
> than failing like iommufd. Curious (like JasonW) on why that retval? I'd
> assume for returning "how much unmapped" we can at least still return 0 for
> nothi