The aim of this course is to introduce the basic usage of the ScaLAPACK and MAGMA libraries.
ScaLAPACK:
ScaLAPACK (Scalable Linear Algebra PACKage) is a library of high-performance dense linear algebra routines for distributed-memory, message-passing computers. It consists largely of a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, where all MPI communication is handled by routines from the BLACS (Basic Linear Algebra Communication Subprograms) library.

- General introduction to the PBLAS and ScaLAPACK libraries
- Main ideas behind decomposing linear algebra problems for parallel programming
- Examples of basic operations with PBLAS: vector-vector, vector-matrix and matrix-matrix operations
- Examples of basic operations with ScaLAPACK: inversion and diagonalization
- Main problem: calculating the exponentiation of a matrix
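As a serial point of reference for the final exercise, the sketch below computes a matrix power A^k by binary (repeated-squaring) exponentiation, needing only O(log k) multiplications; this assumes "exponentiation" means the integer power A^k. The parallel version in the course would replace the naive triple-loop multiply with PBLAS `p?gemm` calls on distributed matrices.

```c
#include <string.h>

#define N 2  /* toy dimension for illustration */

/* C = A * B for N x N matrices stored as 2D arrays. */
static void matmul(const double A[N][N], const double B[N][N], double C[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double s = 0.0;
            for (int l = 0; l < N; l++) s += A[i][l] * B[l][j];
            C[i][j] = s;
        }
}

/* R = A^k by repeated squaring, k >= 0 (A^0 is the identity). */
void matpow(const double A[N][N], unsigned k, double R[N][N]) {
    double base[N][N], tmp[N][N];
    memcpy(base, A, sizeof base);
    memset(R, 0, N * N * sizeof(double));
    for (int i = 0; i < N; i++) R[i][i] = 1.0;  /* start from identity */
    while (k) {
        if (k & 1u) { matmul(R, base, tmp); memcpy(R, tmp, sizeof tmp); }
        matmul(base, base, tmp); memcpy(base, tmp, sizeof tmp);
        k >>= 1;
    }
}
```

Another route covered by the course material is to diagonalize A with ScaLAPACK and raise the eigenvalues to the k-th power, which trades the repeated multiplications for one eigendecomposition.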
MAGMA:
In the second part of the course, we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but targeting hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. We then briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we show how MAGMA can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK.
Trainers:

- Donfack Simplice (MAGMA)
- Hasnaoui Karim (ScaLAPACK)
Prerequisites:
C or C++, and Fortran programming.
Basic notions of linear algebra, as well as of MPI, would be an asset.