On Wed, Jun 6, 2018 at 11:57 AM, Joel Sherrill <j...@rtems.org> wrote:
>
> On Wed, Jun 6, 2018 at 10:51 AM, Paul Menzel <
> pmenzel+gcc.gnu....@molgen.mpg.de> wrote:
>
> > Dear GCC folks,
> >
> >
> > Some scientists in our organization still want to use the Intel compiler,
> > as they say it produces faster code, which is then executed on clusters.
> > Some resources on the Web [1][2] confirm this. (I am aware that it’s
> > heavily dependent on the actual program.)
> >
>
> Do they have specific examples where icc is better for them? Or can they
> point to specific GCC PRs which impact them?
>
>
> GCC versions?
>
> Are there specific CPU model variants of concern?
>
> What flags are used to compile? Sometimes a bit of advice can produce
> improvements.
>
> Without specific examples, it is hard to set goals.

If I could perhaps jump in here for a moment...  Just today I hit upon
a series of small (in lines of code) loops that gcc can't vectorize,
and intel vectorizes like a madman.  They all involve heavy use of
std::vector<std::vector<float>>.  Comparisons were with gcc 8.1 and
intel 2018.u1 on an AMD Opteron 6386 SE, with the program running as
SCHED_FIFO, mlockall'd, affinity set to its own core, and all
interrupts vectored off that core.  So, as close to noise-free as
possible.
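
(For anyone who wants to reproduce that kind of isolation, here's a
minimal sketch of the setup I mean; the function name and core number
are just for illustration, and error checking is omitted.  g++ on
Linux defines _GNU_SOURCE, which the CPU_* macros need.)

#include <sched.h>
#include <sys/mman.h>

static void isolate(int core)
{
        sched_param sp {};
        sp.sched_priority = 1;
        sched_setscheduler(0, SCHED_FIFO, &sp);   // run as SCHED_FIFO
        mlockall(MCL_CURRENT | MCL_FUTURE);       // lock pages in RAM
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        sched_setaffinity(0, sizeof(set), &set);  // pin to one core
}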

I was surprised at the results, but using each compiler's method of
dumping vectorization info (exact flags below), intel wins on two
points:

1) It actually vectorizes
2) Its vectorization output is much more easily readable
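
For reference, the dump flags in question (as far as I know these are
the right spellings for these versions; -fopt-info-vec-missed shows
just the failures):

gcc: -fopt-info-vec-all
icc: -qopt-report=5 -qopt-report-phase=vec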

Options were:

gcc -Wall -ggdb3 -std=gnu++17 -flto -Ofast -march=native

vs:

icc -Ofast -std=gnu++14


So, not exactly identical flags, but pretty close.


So here's an example chunk of code (not very readable, sorry about
that) that intel can vectorize, making it about 50% faster:

        std::size_t nLayers { input.nn.size() };
        // (input, xSize, and xNorm come from the enclosing scope)
        //std::size_t ySize = std::max_element(input.nn.cbegin(), input.nn.cend(),
        //        [](auto a, auto b){ return a.size() < b.size(); })->size();
        std::size_t ySize = 0;
        for (auto const & nn: input.nn)
                ySize = std::max(ySize, nn.size());

        float yNorm[ySize];     // VLA: a g++/icc extension, not standard C++
        for (auto & y: yNorm)
                y = 0.0f;
        for (std::size_t i = 0; i < xSize; ++i)
                yNorm[i] = xNorm[i];
        for (std::size_t layer = 0; layer < nLayers; ++layer) {
                auto & nn = input.nn[layer];
                auto & b = nn.back();   // last inner vector holds the biases
                float y[ySize];
                // y[i] = sum_j nn[j][i] * yNorm[j] + b[i] -- the loop nest
                // in question
                for (std::size_t i = 0; i < nn[0].size(); ++i) {
                        y[i] = b[i];
                        for (std::size_t j = 0; j < nn.size() - 1; ++j)
                                y[i] += nn.at(j).at(i) * yNorm[j];
                }
                // ReLU on every layer but the last, then carry forward
                for (std::size_t i = 0; i < ySize; ++i) {
                        if (layer < nLayers - 1)
                                y[i] = std::max(y[i], 0.0f);
                        yNorm[i] = y[i];
                }
        }
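
For what it's worth, a hypothetical workaround (my sketch, not the
real code) is to flatten each layer into one contiguous row-major
buffer, so the inner loop is unit-stride with no bounds checks or
pointer chasing; forward_layer, rows, and cols are names I made up:

#include <cstddef>
#include <vector>

// One layer: w holds rows*cols weights row-major, b holds cols biases.
void forward_layer(const std::vector<float>& w,
                   const std::vector<float>& b,
                   const float* yNorm, float* y,
                   std::size_t rows, std::size_t cols)
{
        for (std::size_t i = 0; i < cols; ++i)
                y[i] = b[i];
        for (std::size_t j = 0; j < rows; ++j) {
                const float x = yNorm[j];
                const float* row = &w[j * cols];
                for (std::size_t i = 0; i < cols; ++i)  // unit-stride
                        y[i] += row[i] * x;
        }
}

I haven't measured that variant against the original here, so treat it
as a sketch only; marking y and yNorm __restrict might also help gcc
skip its runtime alias checks.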


If I were better at godbolt, I could show the asm, but I'm not.  I'm
willing to learn, though.
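
(In the meantime, the asm can be dumped locally with something like
"g++ -S -fverbose-asm -Ofast -march=native foo.cpp"; icc accepts -S as
well.  foo.cpp is just a placeholder name.)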
