The gathered material focuses on parallel programming with the Message Passing Interface (MPI), OpenMP and CUDA.
One of the key advantages of using HPC is the possibility of parallelising a problem. In the context of product analysis, problem parallelisation means efficiently dividing a large problem into multiple smaller ones and analysing each of them separately, which raises the achievable level of detail and shortens the necessary calculation times. The three dominant programming models employed on today's HPC hardware are presented. On clusters and other distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model, whereas OpenMP is used on shared memory (i.e., on one CPU or across the CPUs of one node of a cluster) and CUDA helps to exploit the capabilities of GPUs.
Skills to be gained:
- Understand the main parallelisation principles
- Take advantage of shared and distributed memory systems as well as accelerators
- Write parallel programs using MPI, OpenMP and CUDA
- Parallelise serial programs by means of MPI, OpenMP and CUDA
- Combine MPI with OpenMP or MPI with CUDA