BLAS will be faster for (non-trivially sized) matrix multiplications, but it 
doesn't apply to component-wise operations (.*, ./).

For component-wise operations, devectorizing here shouldn't give much of 
a speedup. The main speedup actually comes from loop fusion, which gets 
rid of the intermediate arrays allocated when doing something like 
A.*B.*exp(C).
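
To make that concrete, here's a sketch of the fused, devectorized version (assuming A, B, and C are same-sized arrays; the function name is just for illustration):

```julia
# A.*B.*exp(C) allocates temporaries for exp(C), A.*B, and the
# final product. A fused loop computes everything in one pass,
# with only the single output allocation.
function fused_abc(A, B, C)
    R = similar(A)
    for i in eachindex(A, B, C)
        R[i] = A[i] * B[i] * exp(C[i])
    end
    return R
end
```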

For this expression, you can devectorize it using the @devec macro from 
Devectorize.jl:

using Devectorize
@devec Mr = m.*m

At least I think that should work. It should basically generate the code you 
wrote, giving you the efficiency without the ugly C/C++-style extra code.
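
Roughly, the macro should expand into something equivalent to the hand-written loop (a sketch, not the macro's literal output):

```julia
# What @devec Mr = m.*m should boil down to:
# one pass over m, one output allocation, no temporaries.
Mr = similar(m)
for i in eachindex(m)
    Mr[i] = m[i] * m[i]
end
```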

On Saturday, July 2, 2016 at 1:11:49 AM UTC+2, baillot maxime wrote:
>
> @Tim Holy: Thank you for the web page. I didn't know it. Now I understand 
> a lot of things :)
>
> @Kristoffer and Patrick: I just read about that in the link that Tim gave 
> me. I changed the code and the time went from 0.348052 seconds to 
> 0.037768 seconds.
>
> Thanks to you all. Now I understand a lot of things, including why it was 
> slower than Matlab.
>
> So now I understand why a lot of people were talking about devectorizing 
> matrix calculations. But I think it's sad, because if I wanted to do that I 
> would use C or C++, not a matrix-oriented language like Julia or Matlab.
>
> Anyway! So if I'm not mistaken... it's better for me to create a "mul()" 
> function than to use ".*"?
>
