ScaLAPACK (Scalable Linear Algebra PACKage) is a library for high-performance dense linear algebra on distributed-memory, message-passing computers. It consists largely of a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, where all the MPI communication is handled by routines provided by the BLACS (Basic Linear Algebra Communication Subprograms) library.
- General introduction to the PBLAS and ScaLAPACK libraries
- Main ideas for decomposing linear algebra problems in parallel programming
- Examples of basic operations with PBLAS: vector-vector, matrix-vector and matrix-matrix operations
- Examples of basic operations with ScaLAPACK: matrix inversion and diagonalization
- Main hands-on problem: computing the exponentiation of a matrix
In the second part of the course, we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but designed for hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. Then, we briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we present MAGMA and show how it can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK.
- Donfack Simplice (MAGMA)
- Hasnaoui Karim (ScaLAPACK)
C or C++ and Fortran programming.
Notions of linear algebra and of MPI would be an asset.