Introduction to PETSc @ MdlS/Idris

The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.

It enables researchers to delegate the linear algebra part of their applications to a specialized team, and to test various solution methods. The course will provide the necessary basis to get started with PETSc and give an overview of its possibilities. Presentations will alternate with hands-on sessions (in C or Fortran).
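As an illustration of the kind of problem PETSc targets, here is a minimal sketch, in Python with SciPy rather than PETSc's own C/Fortran API, of solving a sparse linear system arising from a 1D Poisson discretization. PETSc offers the same workflow, in parallel and at scale, through its Mat, Vec and KSP objects.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Tridiagonal system from a 1D Poisson finite-difference discretization:
# the typical PDE-derived sparse system that PETSc solvers handle.
n = 100
A = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csr")
b = np.ones(n)

x = spla.spsolve(A, b)              # direct sparse solve
residual = np.linalg.norm(A @ x - b)
print(residual)                     # essentially zero
```

In PETSc one would instead assemble the matrix in parallel and pick among many Krylov solvers and preconditioners at run time, which is precisely what the hands-on sessions explore.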

Learning outcomes:

On completion of this course, the participant should:
– Be able to build and solve simple PDE examples
– Use and compare different solvers on these examples
– Be familiar with using the online documentation
– Be able to easily explore other PETSc features relevant to their application.

Prerequisites:

C or Fortran programming.
Notions of linear algebra, as well as notions of MPI, would be an asset.

Link to the registration page

Introduction to ScaLAPACK and MAGMA libraries @ MdlS/Idris

The aim of this course is to introduce the basic usage of the ScaLAPACK and MAGMA libraries.


ScaLAPACK (Scalable Linear Algebra PACKage) is a library for high-performance dense linear algebra on distributed-memory message-passing computers. It is largely based on a subset of LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines redesigned for distributed-memory MIMD parallel computers, with all MPI communication handled by routines from the BLACS (Basic Linear Algebra Communication Subprograms) library.

The lecture will focus on how to use the PBLAS (Parallel BLAS) and ScaLAPACK libraries for linear algebra problems in HPC:
    • General introduction to the PBLAS and ScaLAPACK libraries
    • Main ideas for decomposing linear algebra problems in parallel programming
    • Examples of basic operations with PBLAS: vector-vector, vector-matrix and matrix-matrix operations
    • Examples of basic operations with ScaLAPACK: inversion and diagonalization
    • A main problem based on computing a matrix exponential
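To make the decomposition idea concrete, the sketch below (illustrative Python, not the ScaLAPACK API) computes which process in a Pr × Pc grid owns a given entry of a global matrix under ScaLAPACK's 2D block-cyclic layout, where blocks of size mb × nb are dealt out cyclically across the grid.

```python
import numpy as np

# 2D block-cyclic ownership: ScaLAPACK distributes a global matrix over
# a Pr x Pc process grid in blocks of size mb x nb, cycling over the grid.
def owner(i, j, mb, nb, Pr, Pc):
    """Process-grid coordinates owning global entry (i, j)."""
    return ((i // mb) % Pr, (j // nb) % Pc)

# Example: 8x8 matrix, 2x2 blocks, 2x2 process grid.
grid = np.array([[owner(i, j, 2, 2, 2, 2) for j in range(8)]
                 for i in range(8)])
# Row-owner coordinate cycles 0,0,1,1,0,0,1,1 down the rows
# (and the column-owner coordinate does the same across the columns).
print(grid[:, :, 0][:, 0])   # -> [0 0 1 1 0 0 1 1]
```

This cyclic dealing is what gives ScaLAPACK good load balance in factorizations, where the active part of the matrix shrinks as the algorithm proceeds.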


In the second part of the course, we present MAGMA (Matrix Algebra on GPU and Multicore Architectures), a dense linear algebra library similar to LAPACK but designed for hybrid/heterogeneous architectures. We start by presenting basic concepts of GPU architecture and giving an overview of communication schemes between CPUs and GPUs. Then, we briefly present hybrid CPU/GPU programming models using the CUDA language. Finally, we present MAGMA and how it can be used to easily and efficiently accelerate scientific codes, particularly those already using BLAS and LAPACK.


Trainers:

    • Donfack Simplice (MAGMA)
    • Hasnaoui Karim (ScaLAPACK)

Prerequisites:

C or C++ and Fortran programming.
Notions of linear algebra, as well as notions of MPI, would be an asset.

Link to the registration page

Performance portability for GPU application using high-level programming approaches with Kokkos @ MdlS/Idris

When developing a numerical simulation code with high performance and efficiency in mind, one is often compelled to accept a trade-off between using a native hardware programming model (like CUDA or OpenCL), which has become tremendously challenging, and losing some cross-platform portability.

Porting a large existing legacy code to a modern HPC platform and developing a new simulation code are two different tasks that may both benefit from a high-level programming model, which abstracts away the low-level hardware details.

This training presents existing high-level programming solutions that preserve, as far as possible, performance, maintainability and portability across the vast diversity of modern hardware architectures (multicore CPU, manycore, GPU, ARM, ...), as well as software development productivity.

We will provide an introduction to the high-level C++ programming model Kokkos, and show basic code examples to illustrate the following concepts through hands-on sessions:

  • hardware portability: design an algorithm once and let the Kokkos back-end (OpenMP, CUDA, …) actually derive an efficient low-level implementation;
  • efficient architecture-aware memory containers: what is a Kokkos::view;
  • revisit fundamental parallel patterns with Kokkos: parallel for, reduce, scan, … ;
  • explore some mini-applications.
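The three fundamental patterns listed above can be illustrated, purely for their semantics, in sequential Python (the course itself uses the C++ Kokkos API, where each pattern takes an execution-policy and a functor or lambda):

```python
from itertools import accumulate

# The three patterns Kokkos exposes (parallel_for, parallel_reduce,
# parallel_scan), sketched sequentially for clarity.
data = [1, 2, 3, 4]

# parallel_for: apply an independent body to each index.
squared = [x * x for x in data]       # -> [1, 4, 9, 16]

# parallel_reduce: combine all elements with an associative operator.
total = sum(squared)                  # -> 30

# parallel_scan: running (prefix) combination, inclusive form here.
prefix = list(accumulate(squared))    # -> [1, 5, 14, 30]

print(squared, total, prefix)
```

In Kokkos, the same three calls compile to efficient OpenMP or CUDA code depending on the selected back-end, which is the portability argument of the course.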

Several detailed examples in C/C++/Fortran will be used in hands-on sessions on the high-end hardware platform Ouessant, equipped with Nvidia Pascal GPUs.

Prerequisites:

Some basic knowledge of the CUDA programming model and of C++.

Link to the registration page

Optimization @ MdlS/CINES

This training will present basic elements enabling developers to understand when and how to optimize the performance of their codes.

Optimization:

  • Compiler options
  • Vectorization – Data access (cache usage maximization)
  • Theory to upper-bound the expected performance benefit (speedup, efficiency, peak, memory bandwidth, …)
  • Examples of Stencil codes and N-body simulations
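As a sketch of the upper-bound theory listed above, two standard models can be evaluated directly: Amdahl's law for the best achievable speedup, and the roofline model bounding performance by either peak compute or memory bandwidth. The numbers below are illustrative, not measurements from any particular machine.

```python
# Two standard upper-bound models for expected performance benefit.

def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: best-case speedup when only part of the code parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_workers)

def roofline_bound(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Roofline model: attainable GFLOP/s capped by compute or memory traffic."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

print(amdahl_speedup(0.9, 16))              # ~6.4x, far below the ideal 16x
print(roofline_bound(1000.0, 100.0, 0.25))  # memory-bound regime
```

Comparing a measured kernel against such bounds tells you whether further optimization effort (e.g. better cache usage to raise flops_per_byte) can pay off.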

Half of the course will consist of hands-on sessions. The hands-on sessions will use the library


Trainers:

  • Bertrand Cirou
  • Mathieu Cloirec
  • Cédric Jourdain
  • Umesh Seth
  • Dorian Midou
  • Naima Alaoui

Learning outcomes:
Ability to understand the main issues in code optimization; knowledge of the main tools and techniques for basic debugging.

Prerequisites:

ID card, ZRR access, basic knowledge of Unix, programming experience in C++ and OpenMP.

Link to the registration page

Uncertainty quantification @ MdlS

Uncertainty in computer simulations, deterministic and probabilistic methods for quantifying uncertainty, OpenTURNS software, Uranie software

Uncertainty quantification takes into account the fact that most inputs to a simulation code are only known imperfectly. It seeks to propagate this uncertainty in the data to an uncertainty on the results of the simulation. This training will introduce the main methods and techniques by which this uncertainty propagation can be handled without resorting to an exhaustive exploration of the data space. HPC plays an important role in the subject, as it provides the computing power made necessary by the large number of simulations needed.
The course will present the most important theoretical tools for probability and statistical analysis, and will illustrate the concepts using the OpenTURNS software.
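As a minimal illustration of uncertainty propagation, the sketch below pushes Monte Carlo samples of two uncertain inputs through a deliberately trivial model in plain NumPy; OpenTURNS and Uranie automate this workflow and add the sampling designs, metamodels and sensitivity analyses covered in the course.

```python
import numpy as np

# Monte Carlo uncertainty propagation through a toy model y = x1 * x2.
rng = np.random.default_rng(42)
n = 100_000
x1 = rng.normal(10.0, 1.0, n)   # uncertain input 1: mean 10, std 1
x2 = rng.normal(5.0, 0.5, n)    # uncertain input 2: mean 5, std 0.5

y = x1 * x2   # propagate every sample through the (here trivial) simulation

# For independent inputs, first-order theory predicts
# Var(y) ~ mu2^2*s1^2 + mu1^2*s2^2 = 25 + 25, so std(y) ~ 7.1.
print(y.mean(), y.std())
```

With an expensive simulation code, each sample is a full run, which is why the HPC aspects (Day 3) matter.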

Course Outline

Day 1: Methodology of Uncertainty Treatment – Basics of Probability and Statistics
•    General Uncertainty Methodology (30’): A. Dutfoy
•    Probability and Statistics: Basics (45’): G. Blondet
•    General introduction to OpenTURNS and Uranie (2 × 30’): G. Blondet, J.B. Blanchard
•    Introduction to Python and Jupyter (45’): practical work on distribution manipulations
•    Uncertainty Quantification (45’): J.B. Blanchard
•    OpenTURNS – Uranie practical works: sections 1, 2 (1h): G. Blondet, J.B. Blanchard, A. Dutfoy
•    Central tendency and sensitivity analysis (1h): A. Dutfoy

Day 2: Quantification, Propagation and Ranking of Uncertainties
•    Application to OpenTURNS and Uranie, section 3 (1h): M. Baudin, G. Blondet, F. Gaudier, J.B. Blanchard
•    Estimation of probability of rare events (1h): G. Blondet
•    Application to OpenTURNS and Uranie (1h): M. Baudin, G. Blondet, F. Gaudier, J.B. Blanchard
•    Distributed computing (1h): Uranie (15’, F. Gaudier, J.B. Blanchard), OpenTURNS (15’, G. Blondet), Salome and OpenTURNS (30’, O. Mircescu)
•    Optimisation and Calibration (1h): J.B. Blanchard, M. Baudin
•    Application to OpenTURNS and Uranie (1h): J.B. Blanchard, M. Baudin

Day 3: HPC aspects – Metamodels
•    HPC aspects specific to uncertainty treatment (1h): K. Delamotte
•    Introduction to metamodels (validation, over-fitting) – polynomial chaos expansion (1h): J.B. Blanchard, C. Mai
•    Kriging metamodel (1h): C. Mai
•    Application to OpenTURNS and Uranie (2h): C. Mai, G. Blondet, J.B. Blanchard
•    Discussion / Participants’ projects

Learning outcomes
Learn to recognize when uncertainty quantification can bring new insight to simulations.
Know the main tools and techniques to investigate uncertainty propagation.
Gain familiarity with modern tools for actually carrying out the computations in an HPC context.

Prerequisites:

Basic knowledge of probability will be useful, as will a basic familiarity with Linux.

Link to the registration page

Introduction to machine learning in Python with Scikit-learn @ MdlS/ICM

The rapid growth of artificial intelligence and data science has made scikit-learn one of the most popular Python libraries. The tutorial will present the main components of scikit-learn, covering aspects such as standard classifiers and regressors, cross-validation, and pipeline construction, with examples from various fields of application. Hands-on sessions will focus on medical applications, such as classification for computer-aided diagnosis or regression for the prediction of clinical scores.
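The pipeline and cross-validation components mentioned above fit together as in this minimal example, using synthetic data as a stand-in for a diagnosis task:

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic binary "diagnosis" problem: 200 subjects, 10 features.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# A pipeline chains preprocessing and the classifier, so that during
# cross-validation the scaler is fitted on training folds only.
clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())   # cross-validated accuracy
```

Fitting the scaler inside the pipeline rather than on the full dataset avoids leaking test-fold statistics into training, a point the hands-on sessions emphasize.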

Learning outcomes:

Ability to solve a real-world machine learning problem with scikit-learn

Prerequisites:

  • Basic knowledge of Python (pandas, numpy)
  • Notions of machine learning
  • No prior medical knowledge is required

Link to the registration page