UVA HPC COURSE June 2024 - STEP UP TO SUPERCOMPUTING

6. Programming HPC systems with OpenMP and MPI

Content:

Part 1: The Message Passing Interface (MPI) is the de-facto standard for programming scalable compute systems with distributed memory, such as supercomputers or clusters of workstations. In MPI, instances of the same program run independently on each compute node, and these instances exchange data via explicit messages. The course gives an introduction to the message-passing paradigm of parallel programming and illustrates the most relevant features of MPI, including collective communication operations.
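
As an illustration of the message-passing model described above, here is a minimal sketch in C (the file name hello_mpi.c and the mpicc/mpirun commands in the comments are examples, not part of the course material): every process contributes its rank, and a collective operation sums the ranks onto process 0.

/* Minimal MPI sketch (example file name: hello_mpi.c).
 * Compile with an MPI compiler wrapper, e.g.  mpicc hello_mpi.c -o hello_mpi
 * and run with e.g.                           mpirun -np 4 ./hello_mpi     */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, sum;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime            */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this instance's id (0 .. size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* number of program instances      */

    /* Collective communication: sum the ranks of all processes onto rank 0. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d is %d\n", size - 1, sum);

    MPI_Finalize();                          /* shut down the MPI runtime        */
    return 0;
}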

Part 2: OpenMP is the most widespread standard for programming shared-memory parallel computers, i.e. the majority of today's multi-core-processor-based desktop and server systems. The approach taken by OpenMP is to augment (mostly) ordinary C or Fortran programs with compiler directives, so-called pragmas. These directives instruct an OpenMP-aware compiler where it can safely generate parallel executable code from otherwise sequential programs.
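
As an illustration of this directive-based approach, the minimal C sketch below (the file name and the gcc -fopenmp compile command in the comments are examples) parallelises an otherwise ordinary loop with a single pragma.

/* Minimal OpenMP sketch (example file name: vecadd_omp.c).
 * Compile with OpenMP enabled, e.g.  gcc -fopenmp vecadd_omp.c -o vecadd_omp
 * Without -fopenmp the pragma is ignored and the program runs sequentially. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];
    int i;

    for (i = 0; i < N; i++) {        /* sequential initialisation */
        a[i] = i;
        b[i] = 2.0 * i;
    }

    /* The directive below asks an OpenMP-aware compiler to distribute
     * the loop iterations over the available threads. */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}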


  • Duration: 4 hours.
  • Date and Time: see Schedule.
  • Location: Science Park 904, Room: see Schedule.
  • Target group: Researchers who want to build or modify applications and are interested in using the possibilities offered by modern hardware.
  • Prerequisites: Introduction Unix; Using Lisa or Using Huygens; Knowledge of C.
  • Course leader: Kees van Reeuwijk (Informatics Institute).
