Description: One of the key advantages of using HPC is the possibility of parallelising a problem. In product analysis, parallelisation means efficiently dividing a large problem into multiple smaller ones and analysing each of them separately, which raises the achievable level of detail and shortens calculation times. Participants are expected to bring basic programming knowledge (e.g. Python, C/C++, Fortran), which will be extended with distributed-memory parallel programming using MPI and shared-memory parallelisation with OpenMP.
Workflow: The distribution of the covered topics across the five training days is foreseen as follows:
- Day 1; The first day of training is devoted to an introduction to HPC with a focus on parallelisation, covering both hardware and software aspects. The different ways of building parallel hardware, i.e. shared memory as found in multi-core CPUs versus distributed memory as found in HPC clusters, as well as accelerators, and a combination of all of the above in today's state-of-the-art HPC systems, call for different parallel programming paradigms to fully exploit the capabilities of HPC clusters. The use of the e-learning platform for managing the training event will also be explained. Each participant will work on their own computer and will be granted access to a local HPC cluster.
- Day 2; The second day of training is devoted to the basic features of the Message Passing Interface (MPI), which is the dominant programming model on the largest HPC clusters worldwide. Participants will listen to lectures given by educators and professionals. The lectures will be interleaved with hands-on labs so that the students can immediately test and understand the basic constructs of MPI.
- Day 3; The third day of training will focus on shared-memory parallelisation with OpenMP. Again, the participants will listen to lectures given by educators and professionals, interleaved with hands-on labs.
- Day 4; The fourth day of training will cover intermediate and some more advanced features of MPI that are of special importance for the topics of HPC in Engineering with a focus on FEM and CFD, covered in O1 and O2 respectively, such as communicator splitting, virtual Cartesian topologies and parallel I/O. Again the participants will listen to lectures given by educators and professionals that will be interleaved with hands-on labs.
- Day 5; Finally, the last training day will deal with accelerator programming (OpenACC/CUDA) in the morning session, while the afternoon will focus on how to combine the three different programming models (MPI, OpenMP, OpenACC/CUDA) within one application to fully exploit today's state-of-the-art HPC systems. Again the participants will listen to lectures given by educators and professionals that will be interleaved with hands-on labs.