Hi, I was playing around with a self-compiled version and the Conda binary of PETSc on the same problem, on my M1 Mac. Interestingly, I found that the Conda binary solves the problem 2-3 times slower than the self-compiled version. (For context, I'm using the petsc4py Python interface.)
I've attached two log views to show the comparison. I was mostly curious about the possible cause for this. I was also curious how I could use my own compiled version of PETSc in my Conda install?
Best,
Jorge
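P.S. My current guess for the second question, which I haven't verified, is that since my build used --with-petsc4py with a --prefix install, the petsc4py module should live under $PETSC_DIR/lib, so it could be made visible inside the Conda environment like this (paths taken from my build log below):

```python
import sys

# Unverified sketch: the --prefix/--with-petsc4py build installs the
# petsc4py module under $PETSC_DIR/lib; prepend it so it wins over the
# Conda-provided petsc4py.
PETSC_DIR = "/Users/jorgenin/Documents/Python/Research/Libraries/petsc/petsc_real"
sys.path.insert(0, PETSC_DIR + "/lib")

# import petsc4py   # would now resolve to the self-compiled build
```

Setting PYTHONPATH to the same directory before launching Python would presumably work too.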
****************************************************************************************************************************************************************
***                          WIDEN YOUR WINDOW TO 160 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document                                       ***
****************************************************************************************************************************************************************

------------------------------------------------------------------ PETSc Performance Summary: ------------------------------------------------------------------
Unknown Name on a named Jorges-MacBook-Pro.local with 1 processor, by jorgenin Thu Oct 19 15:37:21 2023
Using Petsc Release Version 3.20.0, unknown

                         Max       Max/Min     Avg       Total
Time (sec):           3.811e+01     1.000   3.811e+01
Objects:              0.000e+00     0.000   0.000e+00
Flops:                6.859e+09     1.000   6.859e+09  6.859e+09
Flops/sec:            1.800e+08     1.000   1.800e+08  1.800e+08
MPI Msg Count:        0.000e+00     0.000   0.000e+00  0.000e+00
MPI Msg Len (bytes):  0.000e+00     0.000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00     0.000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages:   ----- Time ------  ----- Flop ------   --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 3.6071e+01  94.7%  6.8586e+09 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flop: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
AvgLen: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
BuildTwoSided          1 1.0 1.0000e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph             1 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetUp                1 1.0 3.0000e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFPack               240 1.0 2.4000e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFUnpack             240 1.0 2.2000e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNorm              100 1.0 5.1900e-04 1.0 3.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  7295
VecCopy              100 1.0 5.0400e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               120 1.0 2.1300e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY              100 1.0 8.5000e-04 1.0 3.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  4454
VecScatterBegin      240 1.0 7.1500e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd        240 1.0 2.6300e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatSolve             100 1.0 8.3799e-01 1.0 3.50e+09 1.0 0.0e+00 0.0e+00 0.0e+00  2 51  0  0  0   2 51  0  0  0  4175
MatLUFactorSym         1 1.0 8.8490e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum       100 1.0 2.2694e+01 1.0 2.93e+09 1.0 0.0e+00 0.0e+00 0.0e+00 60 43  0  0  0  63 43  0  0  0   129
MatAssemblyBegin     200 1.0 6.0000e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd       200 1.0 7.5900e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries       100 1.0 1.3336e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCSetUp              100 1.0 2.2783e+01 1.0 2.93e+09 1.0 0.0e+00 0.0e+00 0.0e+00 60 43  0  0  0  63 43  0  0  0   129
PCApply              100 1.0 8.3815e-01 1.0 3.50e+09 1.0 0.0e+00 0.0e+00 0.0e+00  2 51  0  0  0   2 51  0  0  0  4174
KSPSetUp              20 1.0 5.0000e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve             100 1.0 1.9006e+01 1.0 5.84e+09 1.0 0.0e+00 0.0e+00 0.0e+00 50 85  0  0  0  53 85  0  0  0   307
------------------------------------------------------------------------------------------------------------------------
Object Type          Creations   Destructions. Reports information only for process 0.
--- Event Stage 0: Main Stage
Viewer 1 0
Index Set 2 2
IS L to G Mapping 1 0
Star Forest Graph 1 0
Vector 4 1
Matrix 1 0
--- Event Stage 1: PCMPI
========================================================================================================================
Average time to get PetscTime(): 0.
#PETSc Option Table entries:
-nls_solve_ksp_max_it 30 # (source: code)
-nls_solve_ksp_type preonly # (source: code)
-nls_solve_log_view log # (source: code)
-nls_solve_pc_factor_mat_solver_type mumps # (source: code)
-nls_solve_pc_type lu # (source: code)
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=petsc_real --download-mumps --download-ptscotch --download-parmetis --download-metis --download-scalapack --download-bison --download-hypre --with-petsc4py --download-suitesparse --with-debugging=no COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" -DPYTHON-EXECUTABLE=python3.11
-----------------------------------------
Libraries compiled on 2023-10-18 18:33:43 on Jorges-MacBook-Pro.local
Machine characteristics: macOS-14.0-arm64-arm-64bit
Using PETSc directory: /Users/jorgenin/Documents/Python/Research/Libraries/petsc/petsc_real
Using PETSc arch:
-----------------------------------------
Using C compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-unknown-pragmas -fstack-protector -fno-stack-check -Qunused-arguments -fvisibility=hidden -O3 -march=native -mtune=native
Using Fortran compiler: mpif90 -fPIC -Wall -ffree-line-length-none -ffree-line-length-0 -Wno-lto-type-mismatch -Wno-unused-dummy-argument -O3 -march=native -mtune=native
-----------------------------------------
Using include paths: -I/Users/jorgenin/Documents/Python/Research/Libraries/petsc/petsc_real/include -I/opt/X11/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/Users/jorgenin/Documents/Python/Research/Libraries/petsc/petsc_real/lib -L/Users/jorgenin/Documents/Python/Research/Libraries/petsc/petsc_real/lib -lpetsc -Wl,-rpath,/Users/jorgenin/Documents/Python/Research/Libraries/petsc/petsc_real/lib -L/Users/jorgenin/Documents/Python/Research/Libraries/petsc/petsc_real/lib -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -Wl,-rpath,/opt/homebrew/Cellar/open-mpi/4.1.6/lib -L/opt/homebrew/Cellar/open-mpi/4.1.6/lib -Wl,-rpath,/opt/homebrew/opt/hwloc/lib -L/opt/homebrew/opt/hwloc/lib -Wl,-rpath,/opt/homebrew/opt/libevent/lib -L/opt/homebrew/opt/libevent/lib -Wl,-rpath,/opt/homebrew/Cellar/gcc/13.2.0/lib/gcc/current/gcc/aarch64-apple-darwin23/13 -L/opt/homebrew/Cellar/gcc/13.2.0/lib/gcc/current/gcc/aarch64-apple-darwin23/13 -Wl,-rpath,/opt/homebrew/Cellar/gcc/13.2.0/lib/gcc/current/gcc -L/opt/homebrew/Cellar/gcc/13.2.0/lib/gcc/current/gcc -Wl,-rpath,/opt/homebrew/Cellar/gcc/13.2.0/lib/gcc/current -L/opt/homebrew/Cellar/gcc/13.2.0/lib/gcc/current -lHYPRE -lspqr -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -ldmumps -lmumps_common -lpord -lpthread -lscalapack -llapack -lblas -lptesmumps -lptscotchparmetisv3 -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lparmetis -lmetis -lX11 -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -ld_classic -lgfortran -lemutls_w -lquadmath -lc++
-----------------------------------------

------------------------------------------------------------------ PETSc Performance Summary: ------------------------------------------------------------------
Unknown Name on a named Jorges-MacBook-Pro.local with 1 processor, by jorgenin Thu Oct 19 15:40:16 2023
Using Petsc Release Version 3.20.0, Sep 28, 2023

                         Max       Max/Min     Avg       Total
Time (sec):           1.403e+02     1.000   1.403e+02
Objects:              0.000e+00     0.000   0.000e+00
Flops:                6.815e+09     1.000   6.815e+09  6.815e+09
Flops/sec:            4.856e+07     1.000   4.856e+07  4.856e+07
MPI Msg Count:        0.000e+00     0.000   0.000e+00  0.000e+00
MPI Msg Len (bytes):  0.000e+00     0.000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00     0.000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages:   ----- Time ------  ----- Flop ------   --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 1.3815e+02  98.4%  6.8152e+09 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flop: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
AvgLen: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
BuildTwoSided          1 1.0 1.0000e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph             1 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetUp                1 1.0 3.0000e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFPack               240 1.0 3.3000e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFUnpack             240 1.0 1.8000e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNorm              100 1.0 7.3040e-03 1.0 3.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   518
VecCopy              100 1.0 5.5500e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               120 1.0 2.3800e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY              100 1.0 5.8210e-03 1.0 3.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   650
VecScatterBegin      240 1.0 8.2100e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterEnd        240 1.0 4.2800e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatSolve             100 1.0 2.6633e+00 1.0 3.59e+09 1.0 0.0e+00 0.0e+00 0.0e+00  2 53  0  0  0   2 53  0  0  0  1349
MatLUFactorSym         1 1.0 1.0006e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum       100 1.0 1.2305e+02 1.0 2.79e+09 1.0 0.0e+00 0.0e+00 0.0e+00 88 41  0  0  0  89 41  0  0  0    23
MatAssemblyBegin     200 1.0 4.2000e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd       200 1.0 7.5100e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatZeroEntries       100 1.0 1.3714e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCSetUp              100 1.0 1.2315e+02 1.0 2.79e+09 1.0 0.0e+00 0.0e+00 0.0e+00 88 41  0  0  0  89 41  0  0  0    23
PCApply              100 1.0 2.6634e+00 1.0 3.59e+09 1.0 0.0e+00 0.0e+00 0.0e+00  2 53  0  0  0   2 53  0  0  0  1349
KSPSetUp              20 1.0 1.8000e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve             100 1.0 9.3857e+01 1.0 5.83e+09 1.0 0.0e+00 0.0e+00 0.0e+00 67 86  0  0  0  68 86  0  0  0    62
------------------------------------------------------------------------------------------------------------------------
Object Type          Creations   Destructions. Reports information only for process 0.
--- Event Stage 0: Main Stage
Viewer 1 0
Index Set 2 2
IS L to G Mapping 1 0
Star Forest Graph 1 0
Vector 4 1
Matrix 1 0
--- Event Stage 1: PCMPI
========================================================================================================================
Average time to get PetscTime(): 0.
#PETSc Option Table entries:
-nls_solve_ksp_max_it 30 # (source: code)
-nls_solve_ksp_type preonly # (source: code)
-nls_solve_log_view log # (source: code)
-nls_solve_pc_factor_mat_solver_type mumps # (source: code)
-nls_solve_pc_type lu # (source: code)
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: AR=arm64-apple-darwin20.0.0-ar CC=mpicc CXX=mpicxx FC=mpifort CFLAGS="-ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem /Users/jorgenin/Documents/Python/Research/2.077/.conda/include" CPPFLAGS="-D_FORTIFY_SOURCE=2 -isystem /Users/jorgenin/Documents/Python/Research/2.077/.conda/include -mmacosx-version-min=11.0" CXXFLAGS="-ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -stdlib=libc++ -fvisibility-inlines-hidden -fmessage-length=0 -isystem /Users/jorgenin/Documents/Python/Research/2.077/.conda/include" FFLAGS="-march=armv8.3-a -ftree-vectorize -fPIC -fno-stack-protector -O2 -pipe -isystem /Users/jorgenin/Documents/Python/Research/2.077/.conda/include" LDFLAGS="-Wl,-pie -Wl,-headerpad_max_install_names -Wl,-dead_strip_dylibs -Wl,-rpath,/Users/jorgenin/Documents/Python/Research/2.077/.conda/lib -L/Users/jorgenin/Documents/Python/Research/2.077/.conda/lib" LIBS="-lmpifort -lgfortran" --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3 --with-clib-autodetect=0 --with-cxxlib-autodetect=0 --with-fortranlib-autodetect=0 --with-debugging=0 --with-blas-lib=libblas.dylib --with-lapack-lib=liblapack.dylib --with-yaml=1 --with-hdf5=1 --with-fftw=1 --with-hwloc=0 --with-hypre=1 --with-metis=1 --with-mpi=1 --with-mumps=1 --with-parmetis=1 --with-pthread=1 --with-ptscotch=1 --with-shared-libraries --with-ssl=0 --with-scalapack=1 --with-superlu=1 --with-superlu_dist=1 --with-suitesparse=1 --with-x=0 --with-scalar-type=real --with-batch --prefix=/Users/jorgenin/Documents/Python/Research/2.077/.conda
-----------------------------------------
Libraries compiled on 2023-10-10 12:44:13 on Mac-1696940584996.local
Machine characteristics: macOS-11.7.10-x86_64-i386-64bit
Using PETSc directory: /Users/runner/miniforge3/conda-bld/petsc_1696941332205/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla
Using PETSc arch:
-----------------------------------------
Using C compiler: mpicc -ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -isystem /Users/runner/miniforge3/conda-bld/petsc_1696941332205/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include -O3 -D_FORTIFY_SOURCE=2 -isystem /Users/runner/miniforge3/conda-bld/petsc_1696941332205/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include -mmacosx-version-min=11.0
Using Fortran compiler: mpifort -march=armv8.3-a -ftree-vectorize -fPIC -fno-stack-protector -O2 -pipe -isystem /Users/runner/miniforge3/conda-bld/petsc_1696941332205/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include -O3 -D_FORTIFY_SOURCE=2 -isystem /Users/runner/miniforge3/conda-bld/petsc_1696941332205/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include -mmacosx-version-min=11.0
-----------------------------------------
Using include paths: -I/Users/runner/miniforge3/conda-bld/petsc_1696941332205/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpifort
Using libraries: -Wl,-rpath,/Users/runner/miniforge3/conda-bld/petsc_1696941332205/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/lib -L/Users/runner/miniforge3/conda-bld/petsc_1696941332205/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_pla/lib -lpetsc -lHYPRE -lspqr -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lsuperlu -lsuperlu_dist -lfftw3_mpi -lfftw3 -llapack -lblas -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lparmetis -lmetis -lhdf5_hl -lhdf5 -lyaml -lmpifort -lgfortran
-----------------------------------------
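To make the comparison between the two logs above concrete, here is a small ratio calculation over the dominant events, with the wall-clock times copied directly from the two summaries. Almost all of the extra time is in the MUMPS numeric factorization (MatLUFactorNum), which is what the LU PCSetUp spends its time in:

```python
# Wall-clock times (seconds) copied from the two attached -log_view summaries:
# first log = self-compiled PETSc, second log = Conda binary.
self_compiled = {"MatLUFactorNum": 22.694, "MatSolve": 0.83799, "KSPSolve": 19.006}
conda_binary  = {"MatLUFactorNum": 123.05, "MatSolve": 2.6633,  "KSPSolve": 93.857}

# Slowdown of the Conda build relative to the self-compiled build, per event
ratios = {event: conda_binary[event] / self_compiled[event] for event in self_compiled}
for event, r in ratios.items():
    print(f"{event}: {r:.1f}x slower with the Conda binary")
```
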