Hello.
I have GetFEM 5.4.2 installed on Ubuntu from the Mantic repository.
I would like to know if there is a way to check whether the BLAS/LAPACK libraries are correctly linked and used in my Python scripts.
Thank you.
Lorenzo
In Ubuntu/Debian, alternative BLAS/LAPACK versions installed from the repositories are handled by “update-alternatives”; e.g., on my system:
update-alternatives --config libopenblas.so-x86_64-linux-gnu
gives:
There are 3 choices for the alternative libopenblas.so-x86_64-linux-gnu (providing /usr/lib/x86_64-linux-gnu/libopenblas.so).
  Selection    Path                                                         Priority   Status
------------------------------------------------------------
    0          /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblas.so   100        auto mode
    1          /usr/lib/x86_64-linux-gnu/openblas-openmp/libopenblas.so    95         manual mode
    2          /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblas.so   100        manual mode
  * 3          /usr/lib/x86_64-linux-gnu/openblas-serial/libopenblas.so    90         manual mode
OpenBLAS is the best free BLAS you can run on your system, and despite the name it provides both BLAS and LAPACK. Make sure that you are using the serial version, as above; I could not see any gain from using the OpenMP or pthread versions.
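If you prefer to do this check from Python, a minimal sketch is to resolve the alternatives symlink with os.path.realpath (the symlink path below is taken from the output above and may differ on your system):

import os

# The alternatives symlink from the update-alternatives output above;
# realpath follows the /etc/alternatives indirection to the real file.
print(os.path.realpath("/usr/lib/x86_64-linux-gnu/libopenblas.so"))
# prints e.g. a path under .../openblas-serial/ if the serial variant is active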
Some more general hints about discovering linked dependencies on Linux:
if I want to know what libraries the MUMPS solver on my system links to, I run
ldd /usr/lib/x86_64-linux-gnu/libdmumps_seq.so
linux-vdso.so.1 (0x00007f2ba657f000)
liblapack.so.3 => /lib/x86_64-linux-gnu/liblapack.so.3 (0x00007f2ba5a00000)
libmumps_common_seq-5.6.so => /lib/x86_64-linux-gnu/libmumps_common_seq-5.6.so (0x00007f2ba64bc000)
libgfortran.so.5 => /lib/x86_64-linux-gnu/libgfortran.so.5 (0x00007f2ba5600000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f2ba611e000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f2ba5417000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f2ba648d000)
libopenblas.so.0 => /lib/x86_64-linux-gnu/libopenblas.so.0 (0x00007f2ba3000000)
libesmumps-7.0.so => /lib/x86_64-linux-gnu/libesmumps-7.0.so (0x00007f2ba6484000)
libscotch-7.0.so => /lib/x86_64-linux-gnu/libscotch-7.0.so (0x00007f2ba596e000)
/lib64/ld-linux-x86-64.so.2 (0x00007f2ba6581000)
libscotcherr-7.0.so => /lib/x86_64-linux-gnu/libscotcherr-7.0.so (0x00007f2ba647f000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f2ba645e000)
libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007f2ba595b000)
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f2ba592b000)
I see that MUMPS links to /lib/x86_64-linux-gnu/liblapack.so.3, but this file is just a symbolic link. To know exactly which LAPACK is used, I need to dig deeper with
readlink -e /lib/x86_64-linux-gnu/liblapack.so.3
/usr/lib/x86_64-linux-gnu/openblas-serial/liblapack.so.3
so, now I know which LAPACK is actually used by MUMPS.
The same applies to OpenBLAS:
readlink -e /lib/x86_64-linux-gnu/libopenblas.so.0
/usr/lib/x86_64-linux-gnu/openblas-serial/libopenblas-r0.3.27.so
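To check what a running Python process actually loads, rather than what the libraries link to on disk, a minimal sketch on Linux is to import getfem (the GetFEM Python module) and then scan /proc/self/maps, which lists every shared object mapped into the process; the “blas”/“lapack” name filters below are an assumption, adjust as needed:

import getfem  # importing getfem also loads its shared-library dependencies

# /proc/self/maps lists every file mapped into the current process;
# filter for BLAS/LAPACK to see which implementation was actually loaded.
loaded = set()
with open("/proc/self/maps") as maps:
    for line in maps:
        path = line.split()[-1]
        if "blas" in path or "lapack" in path:
            loaded.add(path)
for path in sorted(loaded):
    print(path)

If this prints a path under openblas-serial (or openblas-pthread, etc.), that is the variant your Python scripts are really using.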
Hello Kostas,
I ran update-alternatives --config libopenblas.so-x86_64-linux-gnu
with the following result:
There is 1 choice for the alternative libopenblas.so-x86_64-linux-gnu (providing /usr/lib/x86_64-linux-gnu/libopenblas.so).
  Selection    Path                                                         Priority   Status
------------------------------------------------------------
  * 0          /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblas.so   100        auto mode
    1          /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblas.so   100        manual mode
So the Ubuntu Mantic repository installed the openblas-pthread version by default during the GetFEM installation.
Should I install and switch to openblas-serial? Is it faster than openblas-pthread?
When you write “I could not see any gain from using the openmp or pthread version”,
do you mean that MPI parallelization is not effective in GetFEM?
I’m interested because MPI parallelization is something I’d like to test in the future.
Thank you,
Lorenzo
I found libdmumps_seq, named libdmumps_seq-5.6.1.so, and running
ldd /usr/lib/x86_64-linux-gnu/libdmumps_seq-5.6.1.so
returned the following:
linux-vdso.so.1 (0x00007ffd37bab000)
liblapack.so.3 => /lib/x86_64-linux-gnu/liblapack.so.3 (0x000078bd72c00000)
libmumps_common_seq-5.6.so => /lib/x86_64-linux-gnu/libmumps_common_seq-5.6.so (0x000078bd7337d000)
libgfortran.so.5 => /lib/x86_64-linux-gnu/libgfortran.so.5 (0x000078bd72800000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x000078bd72b15000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x000078bd736a5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x000078bd72400000)
libopenblas.so.0 => /lib/x86_64-linux-gnu/libopenblas.so.0 (0x000078bd70128000)
libesmumps-7.0.so => /lib/x86_64-linux-gnu/libesmumps-7.0.so (0x000078bd7369a000)
libscotch-7.0.so => /lib/x86_64-linux-gnu/libscotch-7.0.so (0x000078bd7276b000)
/lib64/ld-linux-x86-64.so.2 (0x000078bd736e7000)
libscotcherr-7.0.so => /lib/x86_64-linux-gnu/libscotcherr-7.0.so (0x000078bd73695000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x000078bd73674000)
libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x000078bd73661000)
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x000078bd7334b000)
That seems to be exactly the same output as yours.
Then, running readlink -e /lib/x86_64-linux-gnu/liblapack.so.3
returned the following link:
/usr/lib/x86_64-linux-gnu/openblas-pthread/liblapack.so.3
confirming the openblas-pthread variant.
So, the MUMPS solver seems to be linked to LAPACK, Scotch, etc., right?
How can I be sure that these libraries are used in my Python scripts?
Reading the documentation page “Interface with BLAS, LAPACK or ATLAS — GetFEM”,
it is mentioned that in C++ one has to add a line of code like
#define GMM_USES_LAPACK
or to specify
-DGMM_USES_LAPACK
on the compiler command line.
What about Python code? Is the BLAS/LAPACK interface always on?
Is there a way to check that the interface is working with MUMPS and also with all the other solvers?
Thank you,
Lorenzo
I do not think that openblas-pthread is necessarily slower than openblas-serial, but since I could not see any gain from BLAS multithreading, I prefer to just use openblas-serial. Note that the pthread/OpenMP variants of OpenBLAS multithread the BLAS calls themselves within a single process; this is unrelated to MPI parallelization.
This is also in order to avoid a possible clash with GetFEM multithreading, when using a multithreaded (OpenMP) build of GetFEM. To enable multithreading in GetFEM you have to compile it yourself. It works pretty well if you want to accelerate the assembly.
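As a sketch of one way to avoid such a clash, OpenBLAS honours the OPENBLAS_NUM_THREADS environment variable; pinning it to 1 before OpenBLAS gets loaded forces single-threaded BLAS even with the pthread/OpenMP variants:

import os
# Must be set before the first import that loads OpenBLAS,
# otherwise its thread pool is already initialized.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
import getfem

Equivalently, you can export the variable in the shell before launching Python.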
If you also want to accelerate the linear solves with MUMPS, then you need to compile GetFEM with MPI parallelization (OpenMPI) and without multithreading (i.e. without OpenMP).
The MPI version of GetFEM works quite well for, let’s say, up to 16 cores, if your model is large enough, also when used through Python. But MPI is a beast; to get it to work properly you need to know what you are doing. If you just let it choose arbitrary cores on a laptop with “fake” (hyperthreaded) cores, it might not scale at all.
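As a quick sanity check that an MPI run really spreads over several ranks, here is a minimal sketch using mpi4py (this assumes GetFEM built with MPI support and mpi4py built against the same OpenMPI), launched with e.g. mpirun -np 4 python3 script.py:

from mpi4py import MPI  # initializes MPI on import
import getfem  # with an MPI-enabled build, assembly and solves are distributed

comm = MPI.COMM_WORLD
print("rank %d of %d" % (comm.Get_rank(), comm.Get_size()))

If every process prints “rank 0 of 1”, the script is not actually running under MPI. Binding ranks to physical cores (e.g. with OpenMPI’s --bind-to core option) can also help with the scaling problem mentioned above.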