Hi Joe,

Thanks for your message, which asks several questions at once.  Some
answers below:

1) The number of contributors and their level of activity are important
signals for gauging whether a project will receive continued
maintenance over the years.  By that measure, nsimd seems to have one
main contributor and very few community contributions.

2) The Arrow *format* is expected to be performant on both the CPU and
the GPU.  However, the C++ Arrow *library* is primarily a CPU library.
It does have a GPU component (currently only CUDA), but it's limited to
basic utilities such as memory buffers and IPC.  Note that we are open
to contributions if someone wants to expand the scope of the GPU layer.

3) In this discussion, we are looking for a C++ abstraction library
over individual SIMD instructions.  By design, this wouldn't provide
any GPU speedup.  To use GPU capabilities, we would have to think at a
much higher level (and extracting maximum performance on both the CPU
and the GPU from the same code would impose a lot of additional
constraints).

The nsimd README is quite explicit that GPU capabilities are only
provided for the "SPMD" layer (Single Program Multiple Data), not
"SIMD".  So, even if we choose nsimd, it will not give us GPU
acceleration for free.
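
To make the distinction concrete, here is a rough sketch of the kind of
code such an abstraction library lets us write (loosely in xsimd's
style; the exact type and function names vary between library versions,
so take it as illustrative rather than exact).  The abstraction is over
*CPU* vector registers and instruction sets (SSE, AVX, NEON...), not
over a GPU programming model:

    #include <cstddef>
    #include <vector>
    #include <xsimd/xsimd.hpp>

    // Element-wise out = a + b: each iteration processes one SIMD
    // register's worth of floats, the scalar loop handles the tail.
    void add(const std::vector<float>& a, const std::vector<float>& b,
             std::vector<float>& out) {
      using batch = xsimd::simd_type<float>;
      constexpr std::size_t stride = batch::size;
      std::size_t i = 0;
      for (; i + stride <= a.size(); i += stride) {
        batch va = xsimd::load_unaligned(&a[i]);
        batch vb = xsimd::load_unaligned(&b[i]);
        xsimd::store_unaligned(&out[i], va + vb);
      }
      for (; i < a.size(); ++i) {
        out[i] = a[i] + b[i];
      }
    }

The same source compiles to SSE/AVX on x86 and NEON on ARM, which is
exactly the portability we are after here; nothing in it maps to GPU
execution.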

4) AFAIK, SVE is not available in the wild, apart from some Fujitsu
supercomputers.  It may (and probably will?) become more widespread in
the future, of course.

5) You say "SIMD libraries are definitely not equal on performance".  Do
you have data points on this?

6) Static analysis is a bit orthogonal to this.  For the record, I once
managed to feed Arrow into Coverity Scan (while it is nominally "free as
in beer" for open source projects, there are several hoops to jump
through that make it quite impractical).  It did uncover a couple of
minor issues, but nothing really serious AFAIR.  See
https://issues.apache.org/jira/browse/ARROW-4019 and
https://issues.apache.org/jira/browse/ARROW-5629

Rather than static analysis, we currently rely on ASAN, UBSAN and TSAN
builds and CI runs (Address / Undefined Behaviour / Thread Sanitizer,
respectively).  Arrow C++ is also fuzzed by the OSS-Fuzz infrastructure
operated by Google (https://google.github.io/oss-fuzz/).  See
https://arrow.apache.org/docs/developers/cpp/fuzzing.html
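
For reference, an OSS-Fuzz target boils down to a small C++ entry point
like the sketch below.  The ParseUntrustedInput function is a
placeholder of mine, not actual Arrow code; the real targets feed the
bytes into components such as the IPC readers:

    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Stand-in for the code under test.
    static void ParseUntrustedInput(const uint8_t* data, size_t size) {
      std::string copy(reinterpret_cast<const char*>(data), size);
      // ... parse `copy` and exercise the library ...
    }

    // libFuzzer / OSS-Fuzz entry point: the fuzzing engine calls this
    // repeatedly with mutated inputs, and the sanitizers flag any memory
    // error or undefined behaviour a particular input triggers.
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
      ParseUntrustedInput(data, size);
      return 0;  // libFuzzer reserves non-zero return values.
    }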

In any case, while I may be mistaken, I don't expect a SIMD abstraction
library to benefit much from static or runtime analysis, given the
nature of the abstraction and of the things being abstracted.


Thanks for your input, and feel free to bring more data points on the
SIMD abstraction front.

Best regards

Antoine.


On 16/02/2021 01:23, Joe Duarte wrote:
> Hi all -- I looked at some of the SIMD libraries listed by Yibo Cai earlier 
> in the thread, and you might want to take a closer look at nsimd. It looks 
> very polished and has CUDA support, the only one I noticed that took account 
> of GPUs.
> 
> To wit, in what ways is Arrow optimized for GPU compute? I'm new to Arrow and 
> I noticed this bit on the homepage: "...organized for efficient analytic 
> operations on modern hardware like CPUs and GPUs." Does that mean there's 
> actual code targeting GPUs, e.g. CUDA, OpenCL, or C++ AMP (Microsoft)? Or is 
> it more of a thoughtful pre-emptive GPU-readiness, so to speak, in the 
> format's design?
> 
> Getting back to the SIMD library decision, my humble feedback is that you 
> might want to approach it with a bit more evaluative attention. The number of 
> GitHub stars and contributors seemed to be the major or driving 
> considerations in the parts of the thread that I saw. GitHub stars wouldn't 
> make my top-3 criteria, and might not make my list at all. I'm not even sure 
> what that metric signifies -- general interest or something? (For the 
> unfamiliar, it's not a star rating like for movies, but just a count.) It 
> seems there's a lot more to look at than star count or contributor count, for 
> example *performance*. SIMD libraries are definitely not equal on 
> performance. Bugginess too -- I wish there were easier ways -- maybe automated 
> -- to evaluate projects and libraries on code quality. And I assume there are 
> Arrow project-specific criteria that would matter too, which would be 
> completely orthogonal to number of stars on GitHub.
> 
> Nsimd looks polished, and that might be because it's from a company 
> specializing in high-performance computing (https://agenium-scale.com). I 
> hadn't heard of them, but it looks good. One thing that confuses me is that 
> most of the nsimd code is under "include/nsimd/modules/fixed_point". There's 
> no mention of floating point, and there's hardly any code outside of that 
> tree, and I'm not sure why fixed point would be the focus. They don't seem to 
> talk about it, or I missed it. Not sure if this will matter for Arrow. Their 
> CUDA support stands out, but I couldn't find much code. Their Arm SVE support 
> also stands out, but it's not clear that SVE actually exists in the wild. 
> It's Arm's Scalable Vector Extension, which allows SIMD code to be written 
> once and automatically adapted to different vector lengths as needed 
> depending on the CPU. Arm's SIMD is typically 128 bits wide, and with SVE 256 
> and 512 bit widths become trivial, but I don't know of any implementations. 
> Do Amazon's new Graviton2 chips support it? I hadn't heard that, or any 
> support from Cavium or Marvell or whoever else in the Arm server space. SVE is 
> very new.
> 
> For code quality checking, you could throw a library up onto Coverity Scan. 
> It's free for open-source projects. It would be useful for Arrow too if 
> you're not using it already. Automated static analysis, and they support C 
> and C++ code, among others.
> 
> Anyway, those are my thoughts for now.
> 
> Cheers,
> 
> Joe Duarte
> 
> -----Original Message-----
> From: Antoine Pitrou <anto...@python.org> 
> Sent: Saturday, February 13, 2021 2:49 AM
> To: dev@arrow.apache.org
> Subject: Re: [C++] adopting an SIMD library - xsimd
> 
> On Fri, 12 Feb 2021 20:47:21 -0800
> Micah Kornfield <emkornfi...@gmail.com> wrote:
>> That is unfortunate, like I said if the consensus is xsimd, let's move 
>> forward with that.
> 
> I would say it's a soft consensus for now, and I would welcome more 
> viewpoints on the matter.
> 
> Regards
> 
> Antoine.
> 
