Using Intel PSXE / OneAPI

Intel PSXE has been superseded by Intel oneAPI, which is now available on the Nestum cluster. More information on using oneAPI and how it differs from PSXE can be found in the post "Intel OneAPI toolchain now available on Nestum" on the Nestum Cluster site (sofiatech.bg).

First, you need to establish an interactive session in the interact.p partition on node cn001 using the following command:

srun -p interact.p --pty bash

Then the Intel compiler module intel should be loaded:

module load intel

The intel module enables access to all Intel compilers and libraries installed on Nestum:

Intel C/C++ compiler (icc)

The manual page for the icc compiler can be viewed with man icc once the module is loaded. Additional information regarding the compiler, as well as user guides, can be found on the Intel website.
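For example, once the intel module is loaded, a C source file can be compiled like this (hello.c and the -O2 optimization level are illustrative choices only):

icc -O2 hello.c -o hello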

Intel Fortran 77/90/95 compiler (ifort)

The manual page for the ifort compiler can be viewed with man ifort. Again, the Intel website provides user guides and references.
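A Fortran source file is compiled in the same way (hello.f90 is again just a placeholder name):

ifort -O2 hello.f90 -o hello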

Math Kernel Library (MKL)

Support for the MKL libraries can be enabled with the compiler option -mkl. For details, please see the Intel MKL documentation.
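As a sketch, a program calling MKL routines (for example BLAS or LAPACK) can be compiled and linked in a single step, where my_solver.c is a placeholder name:

icc -O2 -mkl my_solver.c -o my_solver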

OpenMP usage

In addition to high-level code optimizations, the Intel compilers also enable threading through automatic parallelization and OpenMP support. It is important to set up your job script so that SLURM passes the right values to the relevant environment variables. Here is an example:

#!/bin/bash
#SBATCH -p medium.p # partition (queue)
#SBATCH -N 1 # number of nodes (must be 1)
#SBATCH -n 32 # number of cores (must be between 1 and 32)
#SBATCH -t 0-2:00 # time (D-HH:MM)
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -e slurm.%N.%j.err # STDERR

export OMP_NUM_THREADS=$SLURM_NTASKS # use all allocated cores as OpenMP threads

module load intel

./your_openmp_program # placeholder: replace with your OpenMP executable
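The program itself must be built with OpenMP support enabled; for the Intel compilers this is done with the -qopenmp option. A minimal compile line might look like this (omp_program.c and the output name are placeholders):

icc -qopenmp -O2 omp_program.c -o your_openmp_program

The resulting batch script can then be submitted with sbatch.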

Automatic parallelization option with OpenMP

The Intel compilers also support automatic parallelization. With automatic parallelization, the compiler detects loops that can be safely and efficiently executed in parallel and generates multithreaded code. Adding the -parallel option to the compile command is the only action required of the programmer.
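For instance, a loop-heavy source file could be compiled with auto-parallelization enabled as follows (loops.c is a placeholder name, not a file from this guide):

icc -parallel -O2 loops.c -o loops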

OpenMPI

In case you want to build or develop parallel code with the OpenMPI API compiled with the Intel compilers, you can use one of the two available modules: openmpi/1.10.3-intel-2017 or openmpi/1.8.8-intel-2017. Again, you should first create an interactive session on compute node cn001. An example batch script would be:

#!/bin/bash
#
#SBATCH -p medium.p # partition (queue)
#SBATCH -N 1 # number of nodes
#SBATCH -n 32 # number of cores
#SBATCH -t 0-2:00 # time (D-HH:MM)
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -e slurm.%N.%j.err # STDERR

module load openmpi # or a specific Intel-built module, e.g. openmpi/1.10.3-intel-2017

mpirun "yourprogram"

Finally, here is the list of available MPI modules.

openmpi/1.10.3-intel-2017
openmpi/1.6.5-gcc-os
openmpi/1.8.8-gcc-os(default)
openmpi/1.8.8-intel-2017
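To pick a specific build instead of the default, load the module by its full name and check which wrapper compilers are picked up, for example:

module load openmpi/1.10.3-intel-2017
which mpicc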