Currently OpenBLAS does what it wants for multithreading.
We hesitated to disable it, but preferred to wait and think about it:
see https://trac.sagemath.org/ticket/21323.

You can still influence its use of threads by setting OPENBLAS_NUM_THREADS.
See the trac ticket; just note that this is not Sage-specific.
And, as you discovered, it seems it is also influenced by OMP_NUM_THREADS...
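
For example, launching Sage with both variables pinned to one thread sidesteps
the question of which one your OpenBLAS build actually honors (as far as I
know, that depends on whether it was built with OpenMP support):

  $ OPENBLAS_NUM_THREADS=1 OMP_NUM_THREADS=1 sage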

On Wednesday, October 5, 2016 at 9:28:23 AM UTC+2, tdumont wrote:
>
> What is the size of the matrices you use?
> Whatever you do, OpenMP in BLAS is only worthwhile if you compute with
> large matrices.
> If your computations are embedded in an @parallel call that launches n
> processes, be careful that your OMP_NUM_THREADS is less than or equal to
> ncores/n.
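>
> For instance, on a 16-core machine with 4 worker processes, something
> along these lines keeps the machine from being oversubscribed (the sizes
> and the function body are purely illustrative):
>
>     $ OMP_NUM_THREADS=4 sage                  # 16 cores / 4 workers = 4
>     sage: @parallel(ncpus=4)
>     ....: def work(i):
>     ....:     A = random_matrix(RDF, 2000)    # dense double-precision matrix
>     ....:     return (A * A).trace()          # the product goes through BLAS
>     sage: results = list(work(range(4)))      # 4 worker processes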
>
> My experience (I am doing numerical computations) is that there are
> very few cases where using OpenMP in BLAS libraries pays off.
> Parallelism should generally be sought at a higher level.
>
> One place where multithreaded BLAS does shine is vendor libraries: with
> Intel's MKL BLAS you can obtain the maximum possible performance of the
> machine when you use DGEMM (i.e. matrix-matrix products), thanks to the
> high arithmetic intensity of that operation. On my 2x8-core Sandy Bridge
> at 2.7 GHz I have obtained more than 300 gigaflops, but only with
> matrices of size > 1000! And this is only true for DGEMM...
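>
> You can see this directly from Sage (the sizes are illustrative; the
> crossover point where extra threads start to help depends on the machine
> and on the BLAS):
>
>     sage: A = random_matrix(RDF, 3000)
>     sage: %time B = A * A       # large DGEMM: extra threads can help
>     sage: C = random_matrix(RDF, 100)
>     sage: %time D = C * C       # tiny DGEMM: threading overhead dominates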
>
> t.d. 
>
> > On 04/10/2016 at 20:26, Jonathan Bober wrote:
> > See the following timings: If I start Sage with OMP_NUM_THREADS=1, a 
> > particular computation takes 1.52 cpu seconds and 1.56 wall seconds. 
> > 
> > The same computation without OMP_NUM_THREADS set takes 12.8 cpu seconds 
> > and 1.69 wall seconds. This is particularly devastating when I'm running 
> > with @parallel to use all of my cpu cores. 
> > 
> > My guess is that this is Linbox related, since these computations do 
> > some exact linear algebra, and Linbox can do some multithreading, which 
> > perhaps uses OpenMP. 
> > 
> > jb12407@lmfdb1:~$ OMP_NUM_THREADS=1 sage 
> > [...] 
> > SageMath version 7.4.beta6, Release Date: 2016-09-24 
> > [...] 
> > Warning: this is a prerelease version, and it may be unstable. 
> > [...] 
> > sage: %time M = ModularSymbols(5113, 2, -1) 
> > CPU times: user 509 ms, sys: 21 ms, total: 530 ms 
> > Wall time: 530 ms 
> > sage: %time S = M.cuspidal_subspace().new_subspace() 
> > CPU times: user 1.42 s, sys: 97 ms, total: 1.52 s 
> > Wall time: 1.56 s 
> > 
> > 
> > jb12407@lmfdb1:~$ sage 
> > [...] 
> > SageMath version 7.4.beta6, Release Date: 2016-09-24 
> > [...] 
> > sage: %time M = ModularSymbols(5113, 2, -1) 
> > CPU times: user 570 ms, sys: 18 ms, total: 588 ms 
> > Wall time: 591 ms 
> > sage: %time S = M.cuspidal_subspace().new_subspace() 
> > CPU times: user 3.76 s, sys: 9.01 s, total: 12.8 s 
> > Wall time: 1.69 s 
> > 
