On 2024-10-30 12:46, Bjorn Helgaas wrote:
> On Fri, Oct 25, 2024 at 06:57:37AM +, Kasireddy, Vivek wrote:
> In the PCIe world, I don't think a TLP can "loop back" to another
> function on the same device.
I'm not sure if the spec says anything that specifically denies this.
But it seems to
On 2024-10-23 23:50, Kasireddy, Vivek wrote:
>> I'd echo many of Bjorn's concerns. In addition, I think the name of the
>> pci_devs_are_p2pdma_compatible() isn't quite right. Specifically this is
>> dealing with PCI functions within a single device that are known to
>> allow P2P traffic. So I th
Intel GPU functions
>> as they are P2PDMA compatible given the way the PF provisions
>> the resources among multiple VFs.
>
> I don't want version history in the commit log. If the content is
> useful, just incorporate it here directly (without the version info),
>
On 2024-10-15 23:29, Kasireddy, Vivek wrote:
> I think it would make sense to limit the passing criteria for device
> functions'
> compatibility to Intel GPUs for now. These are the devices I am currently
> testing that we know are P2P compatible. Would this be OK?
Yes, this sounds good to me.
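For reference, a minimal sketch of what such a same-device check could
look like (the helper name comes from this thread; the body here is an
assumption for illustration, not the merged patch):

	#include <linux/pci.h>

	/* Sketch only: treat two PCI functions as P2PDMA-compatible when
	 * they are functions of the same physical device (e.g. a PF and
	 * one of its VFs), restricted to Intel GPUs for now. */
	static bool pci_devs_are_p2pdma_compatible(struct pci_dev *a,
						   struct pci_dev *b)
	{
		if (a->vendor != PCI_VENDOR_ID_INTEL ||
		    b->vendor != PCI_VENDOR_ID_INTEL)
			return false;

		/* pci_physfn() returns the PF for a VF, or the device
		 * itself when it is not a VF. */
		return pci_physfn(a) == pci_physfn(b);
	}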
On 2024-10-11 20:40, Vivek Kasireddy wrote:
> Functions of the same PCI device (such as a PF and a VF) share the
> same bus and have a common root port and typically, the PF provisions
> resources for the VF. Therefore, they can be considered compatible
> as far as P2P access is concerned.
>
>
r.
Reviewed-by: Logan Gunthorpe
> ---
> mm/memremap.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memremap.c b/mm/memremap.c
> index fef5734d5e4933..e00ffcdba7b632 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -452,7 +
| 49 ++--
> 23 files changed, 90 insertions(+), 157 deletions(-)
I was wondering about the location of some of this code, so it's nice
to see it cleaned up. Except for the one minor issue I noted on patch 6,
it all looks good to me. I've reviewed all the patches and tested the
series with my p2pdma series applied.
Reviewed-by: Logan Gunthorpe
Logan
On 2022-02-06 11:32 p.m., Christoph Hellwig wrote:
> Move the check for the actual pgmap types that need the free at refcount
> one behavior into the out of line helper, and thus avoid the need to
> pull memremap.h into mm.h.
>
> Signed-off-by: Christoph Hellwig
I've noticed mm/memcontrol.c u
On 2022-01-11 4:02 p.m., Jason Gunthorpe wrote:
> On Tue, Jan 11, 2022 at 03:57:07PM -0700, Logan Gunthorpe wrote:
>>
>>
>> On 2022-01-11 3:53 p.m., Jason Gunthorpe wrote:
>>> I just want to share the whole API that will have to exist to
>>> reasonably s
On 2022-01-11 3:57 p.m., Jason Gunthorpe wrote:
> On Tue, Jan 11, 2022 at 03:09:13PM -0700, Logan Gunthorpe wrote:
>
>> Either that, or we need a wrapper that allocates an appropriately
>> sized SGL to pass to any dma_map implementation that doesn't support
>> the
On 2022-01-11 3:53 p.m., Jason Gunthorpe wrote:
> I just want to share the whole API that will have to exist to
> reasonably support this flexible array of intervals data structure..
Is that really worth it? I feel like type safety justifies replicating a
bit of iteration and allocation infrast
On 2022-01-11 2:25 p.m., Matthew Wilcox wrote:
> That's reproducing the bad decision of the scatterlist, only with
> a different encoding. You end up with something like:
>
> struct neoscat {
> dma_addr_t dma_addr;
> phys_addr_t phys_addr;
> size_t dma_len;
> size_t phys_len;
> };
On 2022-01-11 1:17 a.m., John Hubbard wrote:
> On 1/10/22 11:34, Matthew Wilcox wrote:
>> TLDR: I want to introduce a new data type:
>>
>> struct phyr {
>> phys_addr_t addr;
>> size_t len;
>> };
>>
>> and use it to replace bio_vec as well as using it to replace the array
>> of
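To make the shape of the proposal concrete, here is a hedged sketch of
the type plus a mapping loop over an array of them. struct phyr is the
proposal under discussion; dma_map_resource() is the existing API and
is only the right call for MMIO/BAR ranges (ordinary memory would go
through dma_map_page() instead):

	#include <linux/dma-mapping.h>

	struct phyr {
		phys_addr_t addr;
		size_t len;
	};

	/* Map an array of physical ranges for DMA; illustration only. */
	static int phyr_map(struct device *dev, const struct phyr *p,
			    int n, dma_addr_t *out)
	{
		int i;

		for (i = 0; i < n; i++) {
			out[i] = dma_map_resource(dev, p[i].addr, p[i].len,
						  DMA_BIDIRECTIONAL, 0);
			if (dma_mapping_error(dev, out[i]))
				return -ENOMEM;
		}
		return 0;
	}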
On 2020-12-10 12:25 p.m., Thomas Gleixner wrote:
> Use the proper core function.
>
> Signed-off-by: Thomas Gleixner
> Cc: Jon Mason
> Cc: Dave Jiang
> Cc: Allen Hubbe
> Cc: linux-...@googlegroups.com
Looks good to me.
Reviewed-by: Logan Gunthorpe
> ---
>
On 2020-09-08 9:28 a.m., Tvrtko Ursulin wrote:
>>
>> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h
>> b/drivers/gpu/drm/i915/i915
>> index b7b59328cb76..9367ac801f0c 100644
>> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
>> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
>> @@ -27,13 +27,19
On 2020-08-23 6:04 p.m., Tom Murphy wrote:
> I have added a check for the sg_dma_len == 0 :
> """
> } __sgt_iter(struct scatterlist *sgl, bool dma) {
> struct sgt_iter s = { .sgp = sgl };
>
> + if (sgl && sg_dma_len(sgl) == 0)
> + s.sgp = NULL;
>
> if (s.sgp) {
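For context, the surrounding function in i915_scatterlist.h looks
roughly like this with the proposed check applied (reconstructed, and
relying on i915's internal struct sgt_iter, so treat it as a sketch
rather than the exact driver code):

	static inline struct sgt_iter __sgt_iter(struct scatterlist *sgl,
						 bool dma)
	{
		struct sgt_iter s = { .sgp = sgl };

		/* Proposed fix: stop at entries the IOMMU merged away. */
		if (sgl && sg_dma_len(sgl) == 0)
			s.sgp = NULL;

		if (s.sgp) {
			s.max = s.curr = s.sgp->offset;
			s.max += s.sgp->length;
			if (dma)
				s.dma = sg_dma_address(s.sgp);
			else
				s.pfn = page_to_pfn(sg_page(s.sgp));
		}

		return s;
	}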
this purpose.
>
> Signed-off-by: Piotr Stankiewicz
> Suggested-by: Andy Shevchenko
> Reviewed-by: Andy Shevchenko
Looks good to me,
Reviewed-by: Logan Gunthorpe
Thanks,
Logan
On 2020-05-29 3:11 p.m., Marek Szyprowski wrote:
> Patches are pending:
> https://lore.kernel.org/linux-iommu/20200513132114.6046-1-m.szyprow...@samsung.com/T/
Cool, nice! Though, I still don't think that fixes the issue in
i915_scatterlist.h given it still ignores sg_dma_len() and strictly
rel
On 2020-05-29 6:45 a.m., Christoph Hellwig wrote:
> On Thu, May 28, 2020 at 06:00:44PM -0600, Logan Gunthorpe wrote:
>>> This issue is most likely in the i915 driver and is most likely caused by
>>> the driver not respecting the return value of the dma_map_ops::map_sg
>
found no issues. I hope this can make progress and get merged soon. If
you like you can add:
Tested-By: Logan Gunthorpe
> While converting the driver I exposed a bug in the intel i915 driver which
> causes a huge amount of artifacts on the screen of my laptop. You can see a
> p
On 2020-03-12 8:19 a.m., Jason Gunthorpe wrote:
> On Thu, Mar 12, 2020 at 03:47:29AM -0700, Christoph Hellwig wrote:
>> On Thu, Mar 12, 2020 at 11:31:35AM +0100, Christian König wrote:
>>> But how should we then deal with all the existing interfaces which already
>>> take a scatterlist/sg_table?
On 2019-06-26 6:27 a.m., Christoph Hellwig wrote:
> The functionality is identical to the one currently open coded in
> p2pdma.c.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Logan Gunthorpe
Also, for the P2PDMA changes in this series:
Tested-by: Logan Gunthorpe
I'v
On 2019-06-17 6:27 a.m., Christoph Hellwig wrote:
> diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
> index a98126ad9c3a..e083567d26ef 100644
> --- a/drivers/pci/p2pdma.c
> +++ b/drivers/pci/p2pdma.c
> @@ -100,7 +100,7 @@ static void pci_p2pdma_percpu_cleanup(struct percpu_ref
> *ref)
>
On 2019-06-17 6:27 a.m., Christoph Hellwig wrote:
> The functionality is identical to the one currently open coded in
> p2pdma.c.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Logan Gunthorpe
I also did a quick test with the full patch-set to ensure that the setup
and tea
On 2019-06-13 12:27 p.m., Dan Williams wrote:
> On Thu, Jun 13, 2019 at 2:43 AM Christoph Hellwig wrote:
>>
>> Hi Dan, Jérôme and Jason,
>>
>> below is a series that cleans up the dev_pagemap interface so that
>> it is more easily usable, which removes the need to wrap it in hmm
>> and thus allo
On 2019-06-13 3:43 a.m., Christoph Hellwig wrote:
> Passing the actual typed structure leads to more understandable code
> vs the actual references.
Ha, ok, I originally suggested this to Dan when he introduced the
callback[1].
Reviewed-by: Logan Gunthorpe
Logan
[1]
https://lore.kern
On 2019-06-13 2:21 p.m., Dan Williams wrote:
> On Thu, Jun 13, 2019 at 1:18 PM Logan Gunthorpe wrote:
>>
>>
>>
>> On 2019-06-13 12:27 p.m., Dan Williams wrote:
>>> On Thu, Jun 13, 2019 at 2:43 AM Christoph Hellwig wrote:
>>>>
>>>>
| 12 ++--
> mm/hmm.c | 8 ++--
> tools/testing/nvdimm/test/iomap.c | 2 +-
> 7 files changed, 50 insertions(+), 30 deletions(-)
Looks good to me,
Reviewed-by: Logan Gunthorpe
Logan
On 2019-05-14 6:14 p.m., Frank Rowand wrote:
> The high level issue is to provide reviewers with enough context to be
> able to evaluate the patch series. That is probably not very obvious
> at this point in the thread. At this point I was responding to Logan's
> response to me that I should be
On 2019-05-03 12:48 a.m., Brendan Higgins wrote:
> On Thu, May 2, 2019 at 8:15 PM Logan Gunthorpe wrote:
>> On 2019-05-01 5:01 p.m., Brendan Higgins wrote:
>>> +/*
>>> + * struct kunit_try_catch - provides a generic way to run code which might
>>> fail.
>
On 2019-05-01 5:01 p.m., Brendan Higgins wrote:
> +/*
> + * struct kunit_try_catch - provides a generic way to run code which might
> fail.
> + * @context: used to pass user data to the try and catch functions.
> + *
> + * kunit_try_catch provides a generic, architecture independent way to
> ex
f using UML
(i.e. not being able to compile large swaths of the tree due to features
that don't exist in that arch) but these are concerns for later.
I'd prefer to see the unnecessary indirection that I pointed out in
patch 8 cleaned up but, besides that, the code looks good to me.
Rev
On 2019-03-21 4:07 p.m., Brendan Higgins wrote:
> A couple of points, as for needing CONFIG_PCI; my plan to deal with
> that type of thing has been that we would add support for a KUnit/UML
> version that is just for KUnit. It would mock out the necessary bits
> to provide a fake hardware impleme
On 2019-03-20 11:23 p.m., Knut Omang wrote:
> Testing drivers, hardware and firmware within production kernels was the use
> case that inspired KTF (Kernel Test Framework). Currently KTF is available as a
> standalone git repository. That's been the most efficient form for us so far,
> as we
On 2019-03-21 1:13 p.m., Knut Omang wrote:
>> Nevertheless, I don't really see KTF as a real unit testing framework
>> for a number of different reasons; you pointed out some below, but I
>> think the main one being that it requires booting a real kernel on
>> actual hardware;
>
> That depends
Hi,
On 2019-02-14 2:37 p.m., Brendan Higgins wrote:
> This patch set proposes KUnit, a lightweight unit testing and mocking
> framework for the Linux kernel.
I haven't followed the entire conversation but I saw the KUnit write-up
on LWN and ended up, as an exercise, giving it a try.
I really lik
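To give a flavour of the exercise, a minimal KUnit test is only a
handful of lines (these are the standard KUnit macros; the test itself
is a made-up example):

	#include <kunit/test.h>

	static void example_add_test(struct kunit *test)
	{
		/* A trivial assertion, just to show the shape of a case. */
		KUNIT_EXPECT_EQ(test, 4, 2 + 2);
	}

	static struct kunit_case example_test_cases[] = {
		KUNIT_CASE(example_add_test),
		{}
	};

	static struct kunit_suite example_test_suite = {
		.name = "example",
		.test_cases = example_test_cases,
	};
	kunit_test_suite(example_test_suite);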
On 2019-01-31 12:35 p.m., Jerome Glisse wrote:
> So what is this O_DIRECT thing that keeps coming up again and again here :)
> What is the use case ? Note that bio will always have valid struct page
> of regular memory as using PCIE BAR for filesystem is crazy (you do not
> have atomic or cache coher
On 2019-01-31 12:02 p.m., Jason Gunthorpe wrote:
> I still think the right direction is to build on what Logan has done -
> realize that he created a DMA-only SGL - make that a formal type of
> the kernel and provide the right set of APIs to work with this type,
> without being forced to expose s
On 2019-01-30 10:26 a.m., Christoph Hellwig wrote:
> On Wed, Jan 30, 2019 at 10:55:43AM -0500, Jerome Glisse wrote:
>> Even outside GPU drivers, device drivers like RDMA just want to share their
>> doorbell with other devices and they do not want to see those doorbell pages
>> used in direct I/O or anyt
On 2019-01-30 12:38 p.m., Jason Gunthorpe wrote:
> On Wed, Jan 30, 2019 at 02:22:34PM -0500, Jerome Glisse wrote:
>
>> For GPU it would not work; the GPU might want to use main memory (because
>> it is running out of BAR space). It is a lot easier if the p2p_map
>> callback calls the right dma map fu
On 2019-01-30 12:59 p.m., Jason Gunthorpe wrote:
> On Wed, Jan 30, 2019 at 12:45:46PM -0700, Logan Gunthorpe wrote:
>>
>>
>> On 2019-01-30 12:06 p.m., Jason Gunthorpe wrote:
>>>> Way less problems than not having struct page for doing anything
>>>>
On 2019-01-29 9:18 p.m., Jason Gunthorpe wrote:
> Every attempt to give BAR memory to struct page has run into major
> trouble, IMHO, so I like that this approach avoids that.
>
> And if you don't have struct page then the only kernel object left to
> hang meta data off is the VMA itself.
>
> I
On 2019-01-30 10:44 a.m., Jason Gunthorpe wrote:
> I don't see why a special case with a VMA is really that different.
Well one *really* big difference is the VMA changes necessarily expose
specialized new functionality to userspace which has to be supported
forever and may be difficult to chang
On 2019-01-30 2:50 p.m., Jason Gunthorpe wrote:
> On Wed, Jan 30, 2019 at 02:01:35PM -0700, Logan Gunthorpe wrote:
>
>> And I feel the GUP->SGL->DMA flow should still be what we are aiming
>> for. Even if we need a special GUP for special pages, and a special DMA
>>
On 2019-01-30 12:22 p.m., Jerome Glisse wrote:
> On Wed, Jan 30, 2019 at 06:56:59PM +, Jason Gunthorpe wrote:
>> On Wed, Jan 30, 2019 at 10:17:27AM -0700, Logan Gunthorpe wrote:
>>>
>>>
>>> On 2019-01-29 9:18 p.m., Jason Gunthorpe wrote:
>>>>
On 2019-01-30 12:06 p.m., Jason Gunthorpe wrote:
>> Way less problems than not having struct page for doing anything
>> non-trivial. If you map the BAR to userspace with remap_pfn_range
>> and friends the mapping is indeed very simple. But any operation
>> that expects a page structure, which i
On 2019-01-30 12:19 p.m., Jason Gunthorpe wrote:
> On Wed, Jan 30, 2019 at 11:13:11AM -0700, Logan Gunthorpe wrote:
>>
>>
>> On 2019-01-30 10:44 a.m., Jason Gunthorpe wrote:
>>> I don't see why a special case with a VMA is really that different.
>>
>>
On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
> +bool pci_test_p2p(struct device *devA, struct device *devB)
> +{
> + struct pci_dev *pciA, *pciB;
> + bool ret;
> + int tmp;
> +
> + /*
> + * For now we only support PCIe peer to peer but other inter-connect
> + * ca
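For comparison, a sketch of how the same question is answered with the
P2PDMA machinery that eventually landed upstream; the wrapper below is
illustrative, but pci_p2pdma_distance_many() is the real helper that
performs the switch/allow-list checks:

	#include <linux/pci-p2pdma.h>

	/* True if the P2PDMA core finds a usable path from the provider's
	 * BAR memory to the client device (negative means no path). */
	static bool devs_support_p2p(struct pci_dev *provider,
				     struct device *client)
	{
		return pci_p2pdma_distance_many(provider, &client, 1,
						true) >= 0;
	}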
On 2019-01-29 12:32 p.m., Jason Gunthorpe wrote:
> Jerome, I think it would be nice to have a helper scheme - I think the
> simple case would be simple remapping of PCI BAR memory, so if we
> could have, say something like:
>
> static const struct vm_operations_struct my_ops {
> .p2p_map = p2p
On 2019-01-29 12:56 p.m., Alex Deucher wrote:
> On Tue, Jan 29, 2019 at 12:47 PM wrote:
>>
>> From: Jérôme Glisse
>>
>> device_test_p2p() returns true if two devices can peer to peer to
>> each other. We add a generic function as different inter-connects
>> can support peer to peer and we want to
On 2019-01-29 2:50 p.m., Jerome Glisse wrote:
> No, this is the non-HMM case I am talking about here. Fully ignore HMM
> in this frame. A GPU driver that does not support or use HMM in any way
> has all the properties and requirements I listed above. So all the points
> I was making are without HMM i
On 2019-01-29 12:44 p.m., Jerome Glisse wrote:
>> I'd suggest [1] should be a part of the patchset so we can actually see
>> a user of the stuff you're adding.
>
> I did not want to clutter the patchset with device-driver-specific usage
> of this, as the API can be reasoned about in an abstract way.
I
On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
> + /*
> + * Optional for device drivers that want to allow peer to peer (p2p)
> + * mapping of their vma (which can be backed by some device memory) to
> + * another device.
> + *
> + * Note that the exporting device
On 2019-01-29 12:11 p.m., Jerome Glisse wrote:
> On Tue, Jan 29, 2019 at 11:36:29AM -0700, Logan Gunthorpe wrote:
>>
>>
>> On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
>>
>>> + /*
>>> +* Optional for device driver that want to allow peer
On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
> From: Jérôme Glisse
>
> device_test_p2p() returns true if two devices can peer to peer to
> each other. We add a generic function as different inter-connects
> can support peer to peer and we want to generically test this no
> matter what the i
On 2019-01-29 1:57 p.m., Jerome Glisse wrote:
> The GPU driver must be in control and must be called into. There are 2 cases
> in this patchset and I should have instead posted 2 separate patchsets as
> it seems that it is confusing things.
>
> For the HMM page, the physical address of the page ie th
On 2019-01-29 4:47 p.m., Jerome Glisse wrote:
> The whole point is to allow using device memory for a range of virtual
> addresses of a process when it makes sense to use device memory for
> that range. So there are multiple cases where it does make sense:
> [1] - Only the device is accessing the
On 2019-01-29 12:44 p.m., Greg Kroah-Hartman wrote:
> On Tue, Jan 29, 2019 at 11:24:09AM -0700, Logan Gunthorpe wrote:
>>
>>
>> On 2019-01-29 10:47 a.m., jgli...@redhat.com wrote:
>>> +bool pci_test_p2p(struct device *devA, struct device *devB)
>>>
On 2018-11-30 3:28 p.m., Dan Williams wrote:
> On Fri, Nov 30, 2018 at 2:19 PM Logan Gunthorpe wrote:
>>
>> Hey,
>>
>> On 2018-11-29 11:51 a.m., Dan Williams wrote:
>>> Got it, let me see how bad moving arch_remove_memory() turns out,
>>> sounds
Hey,
On 2018-11-29 11:51 a.m., Dan Williams wrote:
> Got it, let me see how bad moving arch_remove_memory() turns out,
> sounds like a decent approach to coordinate multiple users of a single
> ref.
I've put together a patch set[1] that fixes all the users of
devm_memremap_pages() without moving
On 2018-11-28 8:10 p.m., Dan Williams wrote:
> Yes, please send a proper patch.
Ok, I'll send one shortly.
> Although, I'm still not sure I see
> the problem with the order of the percpu-ref kill. It's likely more
> efficient to put the kill after the put_page() loop because the
> percpu-ref w
On 2018-11-29 10:30 a.m., Dan Williams wrote:
> Oh! Yes, nice find. We need to wait for the percpu-ref to be dead and
> all outstanding references dropped before we can proceed to
> arch_remove_memory(), and I think this problem has been there since
> day one because the final exit was always aft
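The ordering that the teardown path ultimately adopted looks roughly
like this (a sketch with assumed names, mirroring what the
devm_memremap_pages() release path does today):

	#include <linux/memremap.h>

	static void pgmap_teardown(struct dev_pagemap *pgmap)
	{
		/* Stop new references from being taken... */
		percpu_ref_kill(&pgmap->ref);

		/* ...wait for every outstanding page reference to drop;
		 * the ref's release callback completes pgmap->done... */
		wait_for_completion(&pgmap->done);

		/* ...and only then is it safe to tear the memory down
		 * (the real code calls arch_remove_memory() here, under
		 * mem_hotplug_begin()/mem_hotplug_done()). */
	}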
able
> to grep the kill routine directly at the devm_memremap_pages() call
> site.
>
> Cc:
> Fixes: e8d513483300 ("memremap: change devm_memremap_pages interface...")
> Reviewed-by: "Jérôme Glisse"
> Reported-by: Logan Gunthorpe
> Reviewed-by: Logan
On 02/04/18 11:20 AM, Jerome Glisse wrote:
> The point I have been trying to get across is that you do have this
> information with dma_map_resource(): you know the device to which you
> are trying to map (the dev argument to dma_map_resource()) and you can
> easily get the device to which the memory
On 30/03/18 01:45 PM, Jerome Glisse wrote:
> Looking at upstream code it seems that the x86 bits never made it upstream
> and thus what is now upstream is only for ARM. See [1] for the x86 code. Dunno
> what happened, I was convinced it got merged. So yes, current code is broken on
> x86. Ccing Joerg mayb
On 02/04/18 01:16 PM, Jerome Glisse wrote:
> There isn't good API at the moment AFAIK, closest thing would either be
> lookup_resource() or region_intersects(), but a more appropriate one can
> easily be added, code to walk down the tree is readily available.
> Moreover this can be optimize lik
On 29/03/18 07:58 PM, Jerome Glisse wrote:
> On Thu, Mar 29, 2018 at 10:25:52AM -0600, Logan Gunthorpe wrote:
>>
>>
>> On 29/03/18 10:10 AM, Christian König wrote:
>>> Why not? I mean the dma_map_resource() function is for P2P while other
>>> dma_map
On 29/03/18 05:44 AM, Christian König wrote:
> On 28.03.2018 at 21:53, Logan Gunthorpe wrote:
>>
>> On 28/03/18 01:44 PM, Christian König wrote:
>>> Well, isn't that exactly what dma_map_resource() is good for? As far as
>>> I can see it makes sure
On 28/03/18 12:28 PM, Christian König wrote:
> I'm just using amdgpu as blueprint because I'm the co-maintainer of it
> and know it mostly inside out.
Ah, I see.
> The resource addresses are translated using dma_map_resource(). As far
> as I know that should be sufficient to offload all the a
On 28/03/18 10:02 AM, Christian König wrote:
> Yeah, that looks very similar to what I picked up from the older
> patches, going to read up on that after my vacation.
Yeah, I was just reading through your patchset and there are a lot of
similarities. Though, I'm not sure what you're trying to a
On 28/03/18 09:07 AM, Christian König wrote:
> On 28.03.2018 at 14:38, Christoph Hellwig wrote:
>> On Sun, Mar 25, 2018 at 12:59:54PM +0200, Christian König wrote:
>>> From: "wda...@nvidia.com"
>>>
>>> Add an interface to find the first device which is upstream of both
>>> devices.
>> Please wo
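A hedged sketch of such a lookup, built only on the existing
pci_upstream_bridge() helper (the function name and approach here are
illustrative, not the posted patch):

	#include <linux/pci.h>

	/* Walk up from 'a' one bridge at a time; for each candidate,
	 * check whether 'b' also passes through it on its way to the
	 * root. The first match is the lowest common upstream bridge. */
	static struct pci_dev *pci_common_upstream(struct pci_dev *a,
						   struct pci_dev *b)
	{
		struct pci_dev *up_a, *up_b;

		for (up_a = pci_upstream_bridge(a); up_a;
		     up_a = pci_upstream_bridge(up_a)) {
			for (up_b = pci_upstream_bridge(b); up_b;
			     up_b = pci_upstream_bridge(up_b)) {
				if (up_a == up_b)
					return up_a;
			}
		}
		return NULL;	/* different root ports */
	}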
On 29/03/18 10:10 AM, Christian König wrote:
> Why not? I mean the dma_map_resource() function is for P2P while other
> dma_map_* functions are only for system memory.
Oh, hmm, I wasn't aware dma_map_resource was exclusively for mapping
P2P. Though it's a bit odd seeing we've been working under
On 28/03/18 01:44 PM, Christian König wrote:
> Well, isn't that exactly what dma_map_resource() is good for? As far as
> I can see it makes sure IOMMU is aware of the access route and
> translates a CPU address into a PCI Bus address.
> I'm using that with the AMD IOMMU driver and at least the
On 23/10/17 10:08 AM, David Laight wrote:
It is also worth checking that the hardware actually supports p2p transfers.
Writes are more likely to be supported than reads.
ISTR that some Intel CPUs support some p2p writes, but there could easily
be errata against them.
Ludwig mentioned a PCIe s
On 22/10/17 12:13 AM, Petrosyan, Ludwig wrote:
> But at first sight it has to be simple:
> The PCIe Write transactions are address routed, so if in the packet header
> the other endpoint address is written the TLP has to be routed (by PCIe
> Switch to the endpoint), the DMA reading from the end
Hi Ludwig,
P2P transactions are still *very* experimental at the moment and take a
lot of expertise to get working in a general setup. It will definitely
require changes to the kernel, including the drivers of all the devices
you are trying to make talk to each other. If you're up for it you ca
Hi Jyri,
Thanks for the ack. However, I'm reworking this patch set to use the
include/linux/io-64-nonatomic* headers which will explicitly devolve
into two 32-bit transfers. It's not clear whether this is appropriate
for the tilcdc driver as it was never set up to use 32-bit transfers
(unlike the o
On 6/26/2017 2:43 PM, Arnd Bergmann wrote:
This hardcodes the behavior of include/linux/io-64-nonatomic-hi-lo.h, which
I find rather confusing, as only about one in five drivers wants this
behavior.
I'd suggest you don't add it in lib/iomap.c at all for 32-bit architectures,
but rather use the s
Thanks Horia.
I'm inclined to just use your patch verbatim. I can set you as author,
but no matter how I do it, I'll need your Signed-off-by.
Logan
On 23/06/17 12:51 AM, Horia Geantă wrote:
> On 6/22/2017 7:49 PM, Logan Gunthorpe wrote:
>> Now that ioread64 and iowrite64 are
hacks that have accumulated in the kernel,
thus far, for working around this.
This set is based off of v4.12-rc6.
Thanks,
Logan
[1] https://marc.info/?l=linux-kernel&m=149774601910663&w=2
Logan Gunthorpe (7):
drm/tilcdc: don't use volatile with iowrite64
iomap: implement
Now that ioread64 and iowrite64 are always available we don't
need the ugly ifdefs to change their implementation when they
are not.
Signed-off-by: Logan Gunthorpe
Cc: "Horia Geantă"
Cc: Dan Douglass
Cc: Herbert Xu
Cc: "David S. Miller"
---
driver
On 6/22/2017 2:36 PM, Alan Cox wrote:
I think that makes sense for the platforms with that problem. I'm not
sure there are many that can't do it for mmio at least. 486SX can't do it
and I guess some ARM32 but I think almost everyone else can including
most 32bit x86.
What's more of a problem is
g the functions universally available to
configurations with CONFIG_GENERIC_IOMAP=y.
For any PIO accesses, the 64-bit operations remain unsupported and
simply call bad_io_access in cases where readq would be called.
Signed-off-by: Logan Gunthorpe
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ell
This is a prep patch for adding a universal iowrite64.
The patch prevents the compiler warnings that would occur when we add
iowrite64, caused by an unnecessary volatile in this driver.
Signed-off-by: Logan Gunthorpe
Cc: Jyri Sarha
Cc: Tomi Valkeinen
Cc: David Airlie
---
drivers/gpu
warning
on arm.
Signed-off-by: Logan Gunthorpe
Cc: Arnd Bergmann
---
include/asm-generic/io.h | 54 +---
1 file changed, 42 insertions(+), 12 deletions(-)
diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 7ef015eb3403..817edaa3da78
On 6/22/2017 11:29 AM, Stephen Bates wrote:
+#define iowrite64be(v,p) iowrite32(cpu_to_be64(v), (p))
Logan, thanks for taking this cleanup on. I think this should be iowrite64 not
iowrite32?
Yup, good catch. Thanks. I'll fix it in a v2 of this series.
Logan
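For the record, the fix queued for v2 would presumably read, with
iowrite64 in place of iowrite32 as Stephen suggests:

	#define iowrite64be(v,p) iowrite64(cpu_to_be64(v), (p))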
Now that ioread64 and iowrite64 are available generically we can
remove the hack at the top of ntb_hw_intel.c that patches them in
when they are not available.
Signed-off-by: Logan Gunthorpe
Cc: Jon Mason
Cc: Dave Jiang
Cc: Allen Hubbe
---
drivers/ntb/hw/intel/ntb_hw_intel.c | 30
Now that we can expect iowrite64 to always exist the hack is no longer
necessary so we just call iowrite64 directly.
Signed-off-by: Logan Gunthorpe
Cc: Jyri Sarha
Cc: Tomi Valkeinen
Cc: David Airlie
---
drivers/gpu/drm/tilcdc/tilcdc_regs.h | 6 --
1 file changed, 6 deletions(-)
diff
the alpha architecture.)
Signed-off-by: Logan Gunthorpe
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
---
arch/alpha/include/asm/io.h | 2 ++
arch/alpha/kernel/io.c | 18 ++
2 files changed, 20 insertions(+)
diff --git a/arch/alpha/include/asm/io.h b/arch/alpha/includ
On 6/22/2017 2:14 PM, Alan Cox wrote:
If a platform doesn't support 64bit I/O operations from the CPU then you
either need to use some kind of platform/architecture specific interface
if present or accept you don't have one.
Yes, I understand that.
The thing is that every user that's currently
On 6/22/2017 2:08 PM, Alan Cox wrote:
But this does not do the same thing as an ioread64 with regards to
atomicity or side effects on the device. The same is true of the other
hacks. You either have a real 64bit single read/write from MMIO space or
you don't. You can't fake it.
Yes, I know. But
On 6/22/2017 3:03 PM, Arnd Bergmann wrote:
Drivers that want a non-atomic variant should include either
include/linux/io-64-nonatomic-hi-lo.h or include/linux/io-64-nonatomic-lo-hi.h
depending on what they need. Drivers that require 64-bit I/O should
probably just depend on CONFIG_64BIT and maybe
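An example of what Arnd describes: a driver opts into a specific split
ordering by including one of the non-atomic headers, which then
provides readq/writeq on 32-bit builds (the headers are real; the
register offset is made up):

	#include <linux/io-64-nonatomic-lo-hi.h>	/* low word first */

	static u64 dev_read_counter(void __iomem *regs)
	{
		/* On 64-bit this is a single readq(); on 32-bit the
		 * header expands it into two readl() calls, low then
		 * high. 0x10 is a hypothetical counter register. */
		return readq(regs + 0x10);
	}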
and thus using an
invalid drm_debugfs_root.
This commit fixes this issue by setting drm_debugfs_root to NULL after
removal and checking that it's not NULL before using it.
Signed-off-by: Logan Gunthorpe
Cc: Daniel Vetter
Cc: Jani Nikula
Cc: Sean Paul
Cc: David Airlie
Link: https://lkm
On 28/04/17 11:51 AM, Herbert Xu wrote:
> On Fri, Apr 28, 2017 at 10:53:45AM -0600, Logan Gunthorpe wrote:
>>
>>
>> On 28/04/17 12:30 AM, Herbert Xu wrote:
>>> You are right. Indeed the existing code looks buggy as they
>>> don't take sg->offset i
On 28/04/17 12:30 AM, Herbert Xu wrote:
> You are right. Indeed the existing code looks buggy as they
> don't take sg->offset into account when doing the kmap. Could
> you send me some patches that fix these problems first so that
> they can be easily backported?
Ok, I think the only buggy one
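The class of bug being discussed, and the corrected access pattern, in
a short sketch (written with today's kmap_local_page(); the patches of
the era used kmap_atomic()):

	#include <linux/highmem.h>
	#include <linux/scatterlist.h>

	static void sg_zero_entry(struct scatterlist *sg)
	{
		/* Buggy pattern: kmap the page and assume the data
		 * starts at offset 0. Correct: the data starts
		 * sg->offset bytes into the page. */
		void *vaddr = kmap_local_page(sg_page(sg));

		memset(vaddr + sg->offset, 0, sg->length);
		kunmap_local(vaddr);
	}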
On 27/04/17 12:53 AM, Christoph Hellwig wrote:
> I think you'll need to follow the existing kmap semantics and never
> fail the iomem version either. Otherwise you'll have a special case
> that's almost never used that has a different error path.
>
> Again, wrong way. Suddenly making things fai
On 27/04/17 02:53 PM, Jason Gunthorpe wrote:
> blkfront is one of the drivers I looked at, and it appears to only be
> memcpying with the bvec_data pointer, so I wonder why it does not use
> sg_copy_X_buffer instead..
Yes, sort of...
But you'd potentially end up calling sg_copy_to_buffer multip
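For reference, the helper under discussion bounces an SG list through a
linear buffer and iterates with sg_miter internally, so no long-lived
kmap is involved (a minimal usage sketch):

	#include <linux/scatterlist.h>

	/* Returns the number of bytes actually copied, which may be
	 * less than buflen if the SG list is shorter. */
	static size_t bounce_sgl(struct scatterlist *sgl,
				 unsigned int nents,
				 void *buf, size_t buflen)
	{
		return sg_copy_to_buffer(sgl, nents, buf, buflen);
	}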
On 26/04/17 09:56 PM, Herbert Xu wrote:
> On Tue, Apr 25, 2017 at 12:20:54PM -0600, Logan Gunthorpe wrote:
>> Very straightforward conversion to the new function in the caam driver
>> and shash library.
>>
>> Signed-off-by: Logan Gunthorpe
>> Cc: He
On 27/04/17 09:27 AM, Jason Gunthorpe wrote:
> On Thu, Apr 27, 2017 at 08:53:38AM +0200, Christoph Hellwig wrote:
> How about first switching as many call sites as possible to use
> sg_copy_X_buffer instead of kmap?
Yeah, I could look at doing that first.
One problem is we might get more Naks o
On 26/04/17 01:37 AM, Roger Pau Monné wrote:
> On Tue, Apr 25, 2017 at 12:21:02PM -0600, Logan Gunthorpe wrote:
>> Straightforward conversion to the new helper, except due to the lack
>> of error path, we have to use SG_MAP_MUST_NOT_FAIL which may BUG_ON in
>> certai