On Wed, Oct 21, 2020 at 10:56:51AM +0200, Daniel Vetter wrote:
> There's three ways to access PCI BARs from userspace: /dev/mem, sysfs
> files, and the old proc interface. Two check against
> iomem_is_exclusive, proc never did. And with CONFIG_IO_STRICT_DEVMEM,
> this starts to matter, since we don't want random userspace having
> access to PCI BARs while a driver is loaded and using it.
>
> Fix this by adding the same iomem_is_exclusive() check we already have
> on the sysfs side in pci_mmap_resource().
>
> References: 90a545e98126 ("restrict /dev/mem to idle io memory ranges")
> Signed-off-by: Daniel Vetter <daniel.vet...@intel.com>
> Cc: Jason Gunthorpe <j...@ziepe.ca>
> Cc: Kees Cook <keesc...@chromium.org>
> Cc: Dan Williams <dan.j.willi...@intel.com>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Cc: John Hubbard <jhubb...@nvidia.com>
> Cc: Jérôme Glisse <jgli...@redhat.com>
> Cc: Jan Kara <j...@suse.cz>
> Cc: Dan Williams <dan.j.willi...@intel.com>
> Cc: linux...@kvack.org
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: linux-samsung-...@vger.kernel.org
> Cc: linux-me...@vger.kernel.org
> Cc: Bjorn Helgaas <bhelg...@google.com>
> Cc: linux-...@vger.kernel.org
> Signed-off-by: Daniel Vetter <daniel.vet...@ffwll.com>
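
[For readers following along, a rough sketch of the kind of check being
described above -- mirroring what pci_mmap_resource() already does on the
sysfs side -- not the literal patch. It assumes proc_bus_pci_mmap() has
already resolved the BAR index 'i' for this mmap request on 'dev':]

	/* Sketch only: refuse the procfs mmap when a driver has claimed
	 * this BAR and strict iomem checking (CONFIG_IO_STRICT_DEVMEM or
	 * IORESOURCE_EXCLUSIVE) makes it off-limits to userspace, same as
	 * the sysfs path.
	 */
	struct resource *res = &dev->resource[i];

	if (res->flags & IORESOURCE_MEM && iomem_is_exclusive(res->start))
		return -EINVAL;
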
Maybe not for fixing in this series, but this access to IORESOURCE_BUSY
doesn't have any locking. The write side holds the resource_lock at
least..

>  		ret = pci_mmap_page_range(dev, i, vma,
>  				  fpriv->mmap_state, write_combine);

At this point the vma isn't linked into the address space, so doesn't
this happen?

        CPU 0                          CPU 1
                                       mmap_region()
                                         vma = vm_area_alloc
                                         proc_bus_pci_mmap
                                           iomem_is_exclusive
                                           pci_mmap_page_range
        revoke_devmem
          unmap_mapping_range()
          // vma is not linked to the address space here,
          // unmap doesn't find it
                                         vma_link()
                                         !!! The VMA gets mapped with
                                             the revoked PTEs

I couldn't find anything that prevents it at least, no mmap_sem on the
unmap side, just the i_mmap_lock.

Not seeing how address space and pre-populating during mmap work
together? Did I miss locking someplace?

Not something to be fixed for this series, this is clearly an
improvement, but seems like another problem to tackle?

Jason
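
[For reference, the window Jason describes comes from the ordering inside
mmap_region(); a heavily abridged paraphrase of mm/mmap.c from that era,
not verbatim kernel code:]

	vma = vm_area_alloc(mm);	/* vma exists, but is private so far */

	error = call_mmap(file, vma);	/* ->mmap(): proc_bus_pci_mmap()
					 * checks iomem_is_exclusive() and
					 * prepopulates the PTEs here */

	/* a revoke + unmap_mapping_range() running in this window walks
	 * i_mmap and finds nothing for this vma, so it unmaps nothing */

	vma_link(mm, vma, prev, rb_link, rb_parent);	/* only now is the
							 * vma visible in
							 * the address space */

[So the iomem_is_exclusive() check and the PTE prepopulation both happen
before the vma becomes visible to unmap_mapping_range(), which is the race
the diagram above shows.]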