beanz added a comment.

In D133668#3847871 <https://reviews.llvm.org/D133668#3847871>, @rjmccall wrote:

> But that's purely on the implementation level, right?  Everything is 
> implicitly vectorized and you're just specifying the computation of a single 
> lane, but as far as that lane-wise computation is concerned, you just have 
> expressions which produce scalar values.

Yes, with the caveat that the language doesn't dictate the maximum SIMD width, 
but some features have minimum widths. The language source (and IR) operates on 
one lane of scalar and vector values, but we do have cross-SIMD-lane 
operations and true scalar (uniform) values, so we have to model the full 
breadth of parallel fun...

> If you don't otherwise have 16-bit (or 8-bit?) types, and it's the type 
> behavior you want, I'm fine with you just using `_BitInt`.  I just want to 
> make sure I understand the situation well enough to feel confident that 
> you've considered the alternatives.

We don't currently have 8-bit types (although I fully expect someone will want 
them, because ML seems to love small data types). I suspect that the integer 
promotion behavior for types smaller than `int` will never make sense for 
HLSL (or really any SPMD/implicit-SIMD programming model).

My inclination is that we should define all of our smaller-than-`int` fixed-size 
integer types as `_BitInt` to opt out of promotion. Along with that, I expect 
we'll disable `short` under HLSL. We will still have `char`, but the 
intent for `char` is really for the _extremely_ limited cases where strings get 
used (i.e., `printf` for debugging).
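
For reference, a minimal plain-C sketch (not HLSL source; the function names are 
just illustrative) of the promotion difference: with Clang's `_BitInt`, arithmetic 
on sub-`int` values stays at the declared width instead of being widened to `int` 
by the usual promotions.

```c
// short operands are promoted to int by the usual arithmetic conversions,
// so the multiply happens in 32 bits and is truncated back on return.
short mul_short(short a, short b) {
  return (short)(a * b);
}

// _BitInt(16) is not subject to the integer promotions: the multiply is
// performed in 16 bits and the result type stays _BitInt(16).
_BitInt(16) mul_bitint(_BitInt(16) a, _BitInt(16) b) {
  return a * b;
}
```

That value-stays-at-declared-width behavior is the property that makes `_BitInt` 
attractive for the fixed-size HLSL types.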


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D133668/new/

https://reviews.llvm.org/D133668
