Following some feedback from users of the OG11 branch, I think I need to withdraw this patch, for now.

The memory pinned via the mlock call does not give the expected performance boost. I had not expected it to do much in my test setup, given that the machine has a lot of RAM and my benchmarks are small, but others have tried larger cases on a variety of machines and architectures, with the same result.

It seems it isn't enough for the memory to be pinned: it has to be pinned using the CUDA API to get the performance boost. I hadn't done this because it was hard to fit into the code abstractions, and the implementation was supposed to be device-independent anyway, but it seems we need a specific pinning mechanism for each device.
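For the curious, the device-specific version will presumably need to look something like the following sketch, using the CUDA driver API; the names and shape here are illustrative only, not the actual plugin hook.

#include <cuda.h>
#include <sys/mman.h>
#include <stddef.h>

/* Illustrative sketch: allocate host memory and pin it through the
   CUDA driver.  mlock alone page-locks the pages, but the driver
   only takes its fast DMA path for memory it has registered itself.  */
static void *
cuda_pinned_alloc (size_t size)
{
  void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return NULL;
  if (cuMemHostRegister (p, size, 0) != CUDA_SUCCESS)
    {
      munmap (p, size);
      return NULL;
    }
  return p;
}

static void
cuda_pinned_free (void *p, size_t size)
{
  cuMemHostUnregister (p);
  munmap (p, size);
}

The runtime API equivalents are cudaHostRegister/cudaHostUnregister.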

I will resubmit this patch with some kind of CUDA/plugin hook soonish, keeping the existing implementation for other device types. I don't know how that'll handle heterogeneous systems, but those ought to be rare.

I don't think libmemkind will resolve this performance issue, although it can certainly be used for host implementations of low-latency memories and the like.

Andrew

On 13/01/2022 13:53, Andrew Stubbs wrote:
On 05/01/2022 17:07, Andrew Stubbs wrote:
I don't believe 64KB will be anything like enough for any real HPC application. Is it really worth optimizing for this case?

Anyway, I'm working on an implementation that uses mmap instead of malloc for pinned allocations. I figure that will simplify the unpin algorithm (it will just be munmap) and optimize for large allocations of the sort I imagine HPC applications will use. It won't fix the ulimit issue.

Here's my new patch.

This version is intended to apply on top of the latest version of my low-latency allocator patch, although the dependency is mostly textual.

Pinned memory is allocated via mmap + mlock, and allocation fails (returns NULL) if the lock fails and there's no fallback configured.

This means that large allocations will now be page-aligned, and therefore pin the smallest number of pages for the size requested, and that pinning is handled automatically when the memory is freed via munmap (which unpins the pages) or moved via mremap (which carries the lock with them).
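To make the behaviour concrete, the allocation path is roughly this shape; names are hypothetical and the error handling is simplified relative to the real patch.

#include <sys/mman.h>
#include <stddef.h>

static void *
pinned_alloc (size_t size)
{
  void *p = mmap (NULL, size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return NULL;
  if (mlock (p, size) != 0)
    {
      /* Pinning failed, e.g. RLIMIT_MEMLOCK exceeded; with no
         fallback configured the allocation fails.  */
      munmap (p, size);
      return NULL;
    }
  return p;
}

static void
pinned_free (void *p, size_t size)
{
  /* munmap releases the pages and implicitly unpins them.  */
  munmap (p, size);
}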

Obviously this is not ideal for allocations much smaller than one page. If that turns out to be a problem in the real world, then we can add a special case fairly straightforwardly, incurring the extra page-tracking expense in those cases only, or maybe implement our own pinned-memory heap (something like what has already been proposed for low-latency memory, perhaps).

Also new is a realloc implementation that behaves better when reallocation fails; the new testcases confirm this.
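By "behaves better" read: the original allocation stays valid when resizing fails, per the usual realloc contract. A sketch under that assumption, reusing pinned_alloc from above and assuming Linux mremap:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <string.h>
#include <stddef.h>

static void *pinned_alloc (size_t size);  /* from the sketch above */

static void *
pinned_realloc (void *p, size_t oldsize, size_t size)
{
  /* Try to resize in place, or let the kernel move the mapping;
     mremap maintains the mlock across a resize or relocation.  */
  void *q = mremap (p, oldsize, size, MREMAP_MAYMOVE);
  if (q != MAP_FAILED)
    return q;
  /* mremap failed (e.g. the lock limit was hit): fall back to
     allocate-copy-free, leaving the original mapping, and the
     caller's data, untouched if the new allocation also fails.  */
  q = pinned_alloc (size);
  if (q == NULL)
    return NULL;
  memcpy (q, p, oldsize < size ? oldsize : size);
  munmap (p, oldsize);
  return q;
}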

OK for stage 1?

Thanks

Andrew
