ARCHER2 Advanced MPI course (June 2024)

CC BY-NC-SA 4.0

David Henty, EPCC: 27-28 June 2024, 09:30 - 17:00 BST, online

This course is aimed at programmers seeking to deepen their understanding of MPI and explore some of its more recent and advanced features. We cover topics including exploiting shared-memory access from MPI programs, communicator management and advanced use of collectives. We also look at performance aspects such as which MPI routines to use for scalability, MPI internal implementation issues and overlapping communication and calculation.

Intended learning outcomes

  • Understanding of how internal MPI implementation details affect performance
  • Techniques for overlapping communications and calculation
  • Familiarity with neighbourhood collective operations in MPI
  • Understanding of best practice for MPI+OpenMP programming
  • Knowledge of MPI memory models for RMA operations
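
As a taster for the overlap material, here is a minimal sketch (not taken from the course slides) of the basic pattern: start a non-blocking halo exchange, compute on data that does not depend on the halos, then wait before finishing the boundary points. The array extent N, the neighbour ranks and the update routines are illustrative placeholders.

#include <mpi.h>

#define N 1024                      /* local array extent (illustrative) */

void update_interior(double *u);    /* hypothetical compute kernels: the    */
void update_boundary(double *u);    /* interior update needs no halo data   */

void halo_step(double *u, int left, int right, MPI_Comm comm)
{
    MPI_Request reqs[4];

    /* Start the halo exchange: receive into the ghost cells at each end
       of the local array and send the adjacent edge cells. */
    MPI_Irecv(&u[0],   1, MPI_DOUBLE, left,  0, comm, &reqs[0]);
    MPI_Irecv(&u[N-1], 1, MPI_DOUBLE, right, 1, comm, &reqs[1]);
    MPI_Isend(&u[1],   1, MPI_DOUBLE, left,  1, comm, &reqs[2]);
    MPI_Isend(&u[N-2], 1, MPI_DOUBLE, right, 0, comm, &reqs[3]);

    /* Overlap: do the work that does not need halo data while the
       messages are (potentially) in flight. */
    update_interior(u);

    /* Complete the exchange, then update the points that need the halos. */
    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
    update_boundary(u);
}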

Prerequisites

Attendees should be familiar with MPI programming in C, C++ or Fortran, e.g. have attended the ARCHER2 MPI course.

Requirements

Participants must bring a laptop with a Mac, Linux or Windows operating system (not a tablet, Chromebook, etc.) on which they have administrative privileges.

They are also required to abide by the ARCHER2 Code of Conduct.

Timetable (all times are in British Summer Time)

Although the start and end times will be as indicated below, this is a draft timetable based on a previous run of the course and the details may change for this run.

Unless otherwise indicated all material is Copyright © EPCC, The University of Edinburgh, and is only made available for private study.

Day 1: Thursday 27th June

Day 2: Friday 28th June

Exercise Material

Unless otherwise indicated all material is Copyright © EPCC, The University of Edinburgh, and is only made available for private study.

Day 1

SLURM batch scripts are set to run in the short queue and should work any time. However, on days when the course is running, we have special reserved queues to guarantee fast turnaround.

The reserved queue for today is called ta161_1261863. To use this queue, change the --qos and --reservation lines to:

#SBATCH --qos=reservation
#SBATCH --reservation=ta161_1261863
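
For context, those two lines sit inside an otherwise normal ARCHER2 batch script. A minimal sketch is shown below; the job name, node count, walltime and budget code are placeholders (take the real values from the supplied archer2.job script), and only the --qos and --reservation lines are the ones you need to change.

#!/bin/bash
#SBATCH --job-name=halobench        # placeholder job name
#SBATCH --nodes=2                   # placeholder node count
#SBATCH --ntasks-per-node=128
#SBATCH --time=00:10:00             # placeholder walltime
#SBATCH --account=ta161             # placeholder budget code
#SBATCH --partition=standard
#SBATCH --qos=reservation
#SBATCH --reservation=ta161_1261863

srun ./halobench
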
  • Ping-pong exercise sheet

  • Ping-pong source code

  • A description of the 3D halo-swapping benchmark is in this README

  • Download the code directly to ARCHER2 using: git clone https://github.com/davidhenty/halobench

    • compile with make -f Makefile-archer2
    • submit with sbatch archer2.job
  • Other things you could do with the halo swapping benchmark:

    • change the buffer size to be very small (a few tens of bytes) or very large (bigger than the eager limit) to see if that affects the results;
    • run on different numbers of nodes.
  • Note that you will need to change the number of repetitions to get reasonable runtimes: many more for smaller messages, many fewer for larger messages. Each test needs to run for at least a few seconds to give reliable results.

  • The halobench program contains an example of using MPI_Neighbor_alltoall() to do pairwise swaps of data between neighbouring processes in a regular 3D grid; a minimal sketch of this pattern appears after this list.

  • Tomorrow's traffic modelling problem sheet also contains a final MPI exercise in Section 3 to replace point-to-point boundary swapping with neighbourhood collectives.
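
To illustrate the neighbourhood-collective pattern mentioned above, here is a minimal sketch (not the halobench source itself) that builds a periodic 3D Cartesian communicator and exchanges one double with each of the six neighbours using MPI_Neighbor_alltoall; the buffer size and grid dimensions are placeholders.

#include <mpi.h>

int main(int argc, char **argv)
{
    int nprocs, rank;
    int dims[3] = {0, 0, 0}, periods[3] = {1, 1, 1};
    double sendbuf[6], recvbuf[6];     /* one entry per neighbour (toy size) */
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Let MPI factorise the process count into a 3D grid, then attach the
       Cartesian topology: this topology defines the six-neighbourhood used
       by the neighbourhood collective below. */
    MPI_Dims_create(nprocs, 3, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 0, &cart);
    MPI_Comm_rank(cart, &rank);

    for (int i = 0; i < 6; i++)
        sendbuf[i] = (double) rank;    /* placeholder payload */

    /* Each process sends one double to, and receives one double from,
       each of its six neighbours in a single call. */
    MPI_Neighbor_alltoall(sendbuf, 1, MPI_DOUBLE,
                          recvbuf, 1, MPI_DOUBLE, cart);

    MPI_Finalize();
    return 0;
}

In halobench the same idea is used with much larger, configurable message sizes, which is what the buffer-size experiments suggested above are probing.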

Day 2

The reserved queue for today is called ta161_1261866. To use this queue, change the --qos and --reservation lines to:

#SBATCH --qos=reservation
#SBATCH --reservation=ta161_1261866

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

CC BY-NC-SA 4.0
