On Tue, Jun 07, 2022 at 01:28:33PM +0100, Andrew Stubbs wrote:
> > A performance boost for what kind of code?
> > I don't understand how the CUDA API could be useful (or could be used at
> > all) if offloading to NVPTX isn't involved.  The fact that somebody asks
> > for host memory allocation with omp_atk_pinned set to true doesn't mean it
> > will be in any way related to NVPTX offloading (unless it is in an NVPTX
> > target region obviously, but then mlock isn't available, so sure, if there
> > is something CUDA can provide for that case, nice).
> 
> This is specifically for NVPTX offload, of course, but then that's what our
> customer is paying for.
> 
> The expectation, from users, is that memory pinning will give the benefits
> specific to the active device. We can certainly make that happen when there
> is only one (flavour of) offload device present. I had hoped it could be one
> way for all, but it looks like not.

I think that is just an expectation that isn't backed by anything in the
standard.
When users need something like that, it would be good to describe exactly
what it is: memory that will be primarily used for interfacing with
offloading device 0 (or some specific device given by some number), memory
that can be used without remapping on some offloading device, or something
else?  Once we know what exactly that is (e.g. what the CUDA or GCN APIs
can provide), we can discuss on omp-lang whether there shouldn't be some
standard way to ask for such an allocator.  Or there is always the
possibility of extensions.  I'm not sure whether one can just define
ompx_atv_whatever, use some large value for it (the spec doesn't reserve a
vendor range that would be safe to use) and support it that way.
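To make that concrete, the kind of request I have in mind is roughly the
following sketch (omp_atk_pinned and the other calls are standard OpenMP 5.x
API; the vendor-extension trait mentioned in the comment is purely
hypothetical):

#include <omp.h>
#include <stdio.h>

int
main (void)
{
  /* Ask for pinned (page-locked) host memory through the standard
     omp_atk_pinned trait.  Whether the runtime backs this with mlock or
     with something the offloading plugin understands is exactly the
     question above.  A hypothetical vendor trait, say
     { ompx_atk_preferred_device, 0 }, could additionally say the memory is
     meant for interfacing with device 0; no such trait exists today.  */
  omp_alloctrait_t traits[] = { { omp_atk_pinned, omp_atv_true } };
  omp_allocator_handle_t a
    = omp_init_allocator (omp_default_mem_space, 1, traits);
  double *buf = omp_alloc (1024 * sizeof (double), a);
  if (buf != NULL)
    {
      /* ... use buf, e.g. as the host side of map clauses ... */
      omp_free (buf, a);
    }
  else
    fprintf (stderr, "pinned allocation failed\n");
  omp_destroy_allocator (a);
  return 0;
}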

A different matter is allocators in the offloading regions themselves.
I think we should translate omp_alloc etc. calls in such regions, when they
use constant-expression standard allocators, into doing the allocation
through other means, or allocators.c can be overridden or amended for the
needs and possibilities of the offloading targets.
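For example, a call like the following inside a target region uses a
constant-expression standard allocator and is the kind of thing that could
be lowered to whatever a particular offloading target can offer (just a
sketch of the user-level code; the lowering itself is what would need to be
implemented):

#include <omp.h>

int
f (void)
{
  int result = 0;
#pragma omp target map(tofrom: result)
  {
    /* omp_low_lat_mem_alloc is one of the constant standard allocators, so
       the compiler could recognize this call in a target region and turn it
       into a device-specific allocation (e.g. GPU shared/LDS memory)
       instead of going through the generic allocators.c path.  */
    int *scratch = omp_alloc (64 * sizeof (int), omp_low_lat_mem_alloc);
    if (scratch)
      {
        scratch[0] = 42;
        result = scratch[0];
        omp_free (scratch, omp_low_lat_mem_alloc);
      }
  }
  return result;
}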

        Jakub
