Hi!

C says that the aligned_alloc size must be an integral multiple of the
alignment.  While glibc doesn't care about that, apparently Solaris does.
So, this patch decreases the priority of aligned_alloc among the other
variants (as it needs more work and can waste more memory) and rounds
the size up to a multiple of the alignment.
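For reference, a small standalone illustration of the rounding logic
(not part of the patch; aligned_alloc_rounded is just a made-up helper
name, and al is assumed to be a power of two, as the fallback path in
alloc.c also requires):

#include <stdio.h>
#include <stdlib.h>

/* Round SIZE up to a multiple of the power-of-two alignment AL before
   calling aligned_alloc.  If size + al - 1 wraps around, the rounded
   value is smaller than SIZE, so that case is rejected.  */
static void *
aligned_alloc_rounded (size_t al, size_t size)
{
  size_t sz = (size + al - 1) & ~(al - 1);
  if (sz < size)
    return NULL;
  return aligned_alloc (al, sz);
}

int
main (void)
{
  /* 13 is not a multiple of 16, so aligned_alloc (16, 13) is invalid
     per C; the helper passes 16 instead.  */
  void *p = aligned_alloc_rounded (16, 13);
  printf ("%p\n", p);
  free (p);
  return 0;
}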
Bootstrapped/regtested on x86_64-linux and i686-linux, and in the PR
Rainer mentioned testing on Solaris; committed to trunk.

2021-11-18  Jakub Jelinek  <ja...@redhat.com>

	PR libgomp/102838
	* alloc.c (gomp_aligned_alloc): Prefer _aligned_malloc over
	memalign over posix_memalign over aligned_alloc over fallback
	with malloc instead of aligned_alloc over _aligned_malloc over
	posix_memalign over memalign over fallback with malloc.  For
	aligned_alloc, round up size to a multiple of al.

--- libgomp/alloc.c.jj	2021-01-04 10:25:56.157037659 +0100
+++ libgomp/alloc.c	2021-11-17 13:32:25.246271672 +0100
@@ -65,18 +65,24 @@ gomp_aligned_alloc (size_t al, size_t si
   void *ret;
   if (al < sizeof (void *))
     al = sizeof (void *);
-#ifdef HAVE_ALIGNED_ALLOC
-  ret = aligned_alloc (al, size);
-#elif defined(HAVE__ALIGNED_MALLOC)
+#ifdef HAVE__ALIGNED_MALLOC
   ret = _aligned_malloc (size, al);
-#elif defined(HAVE_POSIX_MEMALIGN)
-  if (posix_memalign (&ret, al, size) != 0)
-    ret = NULL;
 #elif defined(HAVE_MEMALIGN)
   {
     extern void *memalign (size_t, size_t);
     ret = memalign (al, size);
   }
+#elif defined(HAVE_POSIX_MEMALIGN)
+  if (posix_memalign (&ret, al, size) != 0)
+    ret = NULL;
+#elif defined(HAVE_ALIGNED_ALLOC)
+  {
+    size_t sz = (size + al - 1) & ~(al - 1);
+    if (__builtin_expect (sz >= size, 1))
+      ret = aligned_alloc (al, sz);
+    else
+      ret = NULL;
+  }
 #else
   ret = NULL;
   if ((al & (al - 1)) == 0 && size)

	Jakub