On 2/4/22 08:21, Thomas Schwinge wrote:
> Hi Tom!
>
> Taking this one to the mailing list; not directly related to PR104364:
>
> On 2022-02-03T13:35:55+0000, "vries at gcc dot gnu.org via Gcc-bugs"
> <gcc-b...@gcc.gnu.org> wrote:
>> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104364
>>
>> I've tested this using (recommended) driver 470.94 on boards:
>
> (As not every user will be using the recommended/latest, I too am doing
> some testing on oldish Nvidia/CUDA Driver versions.)  Combinatorial
> explosion is a problem, of course...


I am starting to suspect that I misinterpreted the Nvidia website. When asking for a driver for a board, I get some driver, which I took to be the recommended one.

But I've started to notice the recommended version changing from 470.x to 510.x, which suggests that the recommended one is simply the latest one they've updated in the set of recommended drivers. So it seems inaccurate to talk about "the" recommended driver.

Thanks for the testing, much appreciated. I'm currently testing with 390.x.

>> while iterating over dimensions { -mptx=3.1 , -mptx=6.3 } x { GOMP_NVPTX_JIT=-O0,
>> <default> }.
>
> Do you use separate (nvptx-none offload target only?) builds for
> different '-mptx' variants (likewise: '-misa'), or have you hacked up the
> multilib configuration?

Neither; I'm using --target_board=unix/foffload= for that.
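For illustration, the test matrix described above could be driven by a small wrapper script. This is a hypothetical sketch, not a script from the thread; it only prints the invocation for each point in the { -mptx=3.1, -mptx=6.3 } x { GOMP_NVPTX_JIT=-O0, <default> } matrix, using the --target_board=unix/foffload= mechanism:

```shell
# Hypothetical sketch: print one libgomp test invocation per point in the
# { -mptx=3.1, -mptx=6.3 } x { GOMP_NVPTX_JIT=-O0, <default> } matrix.
for ptx in 3.1 6.3; do
  for jit in "GOMP_NVPTX_JIT=-O0" ""; do
    echo "env $jit make check-target-libgomp" \
         "RUNTESTFLAGS=--target_board=unix/foffload=-mptx=$ptx"
  done
done
```

Dropping the `echo` would actually run the four test-suite passes in sequence.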

> ('gcc/config/nvptx/t-nvptx:MULTILIB_OPTIONS'
> etc., I suppose?)  Should we add a few representative configurations to
> be built by default?  And/or, should we have a way to 'configure' per
> user needs (I suppose: '--with-multilib-list=[...]', as supported for a
> few other targets)?  (I see there's also a new
> '--with-multilib-generator=[...]'; haven't looked in detail.)  No matter
> which way: again, combinatorial explosion is a problem, of course...


As far as I know, the GCC build doesn't finish when the default is switched to anything higher than sm_35, so there's little point in going to a multilib setup at this point. But once we fix that, we could reconsider; otherwise, things are likely to regress again.
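Should a multilib setup be reconsidered later, it would mean extending the target makefile fragment mentioned above. The following is a hypothetical sketch only; the option spellings and directory names are illustrative assumptions, not the actual GCC configuration:

```make
# Hypothetical sketch of gcc/config/nvptx/t-nvptx growing -mptx multilibs.
# Option and directory names here are assumptions for illustration.
MULTILIB_OPTIONS = mptx=3.1/mptx=6.3
MULTILIB_DIRNAMES = ptx31 ptx63
```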

Thanks,
- Tom
