RELEASED: HPE Cray Programming Environment 24.07 General Availability Release for HPE Cray Supercomputing EX and HPE Cray Supercomputing Systems with HPCM 1.11 COS 24.7 SLES15 SP5 - Rev.A – August 2024

PRODUCT DESCRIPTION

The HPE Cray Programming Environment 24.07 release for HPE Cray Supercomputing EX and HPE Cray Supercomputing systems is now available for HPCM 1.11 COS 24.7 SLES15 SP5 systems.

  • COS 24.7 consists of COS Base 3.1.0/USS 1.1.0.

  • NOTE: COS 2.4 (based on SLES 15 SP4) is not supported in this release.
    COS 2.4 users must use the CPE 23.09 (or earlier) release.

SOFTWARE OVERVIEW

This software announcement applies to HPE Cray Supercomputing EX and HPE Cray Supercomputing systems with HPCM 1.11 COS 24.7 SLES15 SP5.

HPE Cray Programming Environment licensed customers may download the release materials by submitting a request to HPE PointNext for the respective package below (depending on entitlement):

  • HPE Cray Programming Environment DOCS for HPCM 24.07-DOCS

  • HPE Cray Programming Environment for HPCM 24.07

See “module help <product>” for the corresponding PE product release notes.
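
For example, to view the release notes for the Cray Compiling Environment (using the cce module provided by CPE):

      module help cce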

Content for HPCM 1.11 COS 24.7 SLES15 SP5 Systems:

  • cpe-24.07-sles15-sp5-hpcm-24.7.2.tar.gz

NOTE: The single tar file above contains the base PE as well as optional installation of third-party compiler support.

  • REQUIRED: Base PE

  • OPTIONAL:

    • AMD AOCC 4.2.0

    • AMD ROCm 6.1 on SLES15 SP5

    • Intel OneAPI 2024.0.2

    • NVIDIA HPC SDK 24.03

    • CUDA 12.3

    • GCC 13

Documentation

(NOTE: Documentation, including revision documentation, may be found on support.hpe.com; many CPE documents are also released in a tar file as part of the official release package. To find a specific document on support.hpe.com, search on the publication number plus "24.07" without the quotation marks. For example, search on S-8022 24.07 to find the CPE 24.07 version of the S-8022 document.)

Installation Guide

  • HPE Cray Programming Environment Installation Guide: HPCM on HPE Cray Supercomputing EX and HPE Cray Supercomputing Systems (24.07), S-8022

HPE_CPE_Installation_Guide_HPCM_on_HPE_Cray_Supercomputing_EX_Systems_24.07_S-8022.pdf

User Guide

  • HPE Cray Programming Environment User Guide: HPCM on HPE Cray Supercomputing EX and HPE Cray Supercomputing Systems (24.07), S-8023

HPE_CPE_User_Guide_HPCM_on_HPE_Cray_Supercomputing_EX_Systems_24.07_S-8023.pdf

Other CPE Product Documentation

  • HPE Performance Analysis Tools User Guide (24.07) (S-8014)

HPE_Performance_Analysis_Tools_User_Guide_24.07_S-8014.pdf

  • HPE Cray Cassini Performance Counters User Guide (24.07) (S-9929)

HPE_Cray_Cassini_Performance_Counters_User_Guide_24.07_S-9929.pdf

  • CCE 18.0.0 Documentation

  • HPE Cray Fortran Reference Manual (18.0.0) (S-3901)

HPE_Cray_Fortran_Reference_Manual_18.0.0_S-3901.pdf

  • HPE Cray Clang C and C++ Quick Reference (18.0.0) (S-2179)

HPE_Cray_Clang_C_and_Cplusplus_Quick_Reference_18.0.0_S-2179.pdf

  • HPE Cray Compiling Environment Release Overview (18.0.0) (S-5212)

HPE_Cray_Compiling_Environment_Release_Overview_18.0.0_S-5212.pdf

Other Documentation

  • CPE-24.07-HPCM-Release-Announcement.pdf

HPE Cray PE Release Information Available on Github for HPE Cray Supercomputing EX Systems

Release information for HPE Cray Programming Environment releases for HPE Cray Supercomputing EX systems is posted on GitHub here:

The information posted includes the release notes for each HPE CPE release (starting with the HPE CPE 21.08 release) as well as links for finding other HPE CPE documentation on support.hpe.com. At this time, the posted release notes are based on SLES15 SP5-based HPE Cray Supercomputing EX systems. The release notes files will also continue to be posted as part of the release packages.

Software Supported:

The following software is supported with this release:

  • HPE Performance Cluster Manager (HPCM) 1.11-managed systems running SLES15 SP5 on the compute/login nodes.

  • COS 24.7 comprises the following components:

    • COS Base

    • HPE Cray Supercomputing User Services Software (USS)

    • HPE SUSE Linux Enterprise Server

IMPORTANT NOTES

  • NOTE: Hidden symbol errors when linking Fortran with CCE 18.0.0: Linking Fortran applications may fail with an error message of the form "hidden symbol `<SYMBOL>' in <LIB> is referenced by DSO". If this error is seen, it can usually be worked around by adding '-lgcc_s' to the link line. This is known to affect use of craypat and cray-parallel-netcdf but may also be seen without them.
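
    For example, a minimal sketch of the workaround (the source file and executable names here are placeholders):

      # Add libgcc_s explicitly to resolve the hidden-symbol link error.
      ftn -o myapp myapp.f90 -lgcc_s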

  • NOTE: NVIDIA HPC SDK releases include three versions of CUDA. For each new SDK version supported by HPE, Cray PE supports only the latest CUDA version included in the respective NVIDIA HPC SDK release (i.e., the older two CUDA versions are not supported by CPE when a new SDK is supported).

  • NOTE: CUDA 12.0 is compatible with GCC 11 (and below).

  • NOTE: Starting with CPE 23.12, cray-gcc packages will no longer be provided for SLES- and COS-based systems. Instead, SLES- and COS-based systems must use the SLES-provided gcc*, gcc*-c++, and gcc*-fortran packages (available from the SLES Development Tools Module). Customers who wish to continue using cray-gcc should remain on CPE 23.09 or an earlier CPE release.
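
    For example, on a SLES15 SP5 image the GCC 13 toolchain can be installed from the Development Tools Module (a sketch; the exact package names and versions depend on the service pack and configured repositories):

      # Install the SLES-provided GCC 13 compilers from the Development Tools Module.
      zypper install gcc13 gcc13-c++ gcc13-fortran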

  • NOTE: Starting with CPE 23.12, the ROCm modulefile can be created with craypkg-gen. The generated Lmod ROCm modulefile works with the CPE amd modulefile.
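
    For example, a sketch of generating the ROCm modulefile with craypkg-gen (the ROCm installation path shown is an assumption; see the craypkg-gen man page for the supported options):

      # Generate an Lmod modulefile for an installed ROCm 6.1 tree.
      craypkg-gen -m /opt/rocm-6.1.0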

  • NOTE: Perftools 24.07.0 and PAPI 7.1.0.2 included with CPE 24.07 are not compatible with ROCm releases earlier than 6.0.0; they require ROCm 6.0.0 or later.

  • NOTE: Starting with CPE 24.03, on systems with aarch64 CPUs and NVIDIA GPUs, if MPICH_GPU_SUPPORT_ENABLED=1 is set, HPE Cray MPI automatically disables the use of XPMEM for intra-node, inter-process MPI data movement operations that involve memory regions managed by system allocators (e.g., mmap, malloc, and new). This guards against potential interactions between XPMEM and the GPU runtime layer that may cause node failures. Instead of XPMEM, HPE Cray MPI uses Linux Cross Memory Attach (CMA) to optimize these MPI operations. It is important to note that GPU Peer2Peer IPC will continue to be used for intra-node, inter-process data movement operations involving memory regions managed via GPU memory allocators (e.g., cudaMalloc, cudaFree).
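
    For example, a minimal sketch of enabling GPU-aware MPI in a job script (the launcher invocation and application name are placeholders):

      # Enable GPU-aware MPI; on aarch64 + NVIDIA systems, host-memory intra-node
      # transfers then use CMA instead of XPMEM automatically.
      export MPICH_GPU_SUPPORT_ENABLED=1
      mpiexec -n 4 ./myapp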

  • NOTE: CPE 24.03 is not supported on XD670.

  • NOTE: Beginning in CPE 24.07, the NVHPC modules (PrgEnv-nvhpc, nvhpc, nvhpc-mixed) contain deprecation messages. The NVHPC modules will be removed in a future release in favor of the NVIDIA modules (PrgEnv-nvidia, nvidia, nvidia-mixed). The move to the NVIDIA modules completes the alignment of CPE module flows. The module flow for all environments is as follows:

    • Load an environment meta module (e.g., PrgEnv-nvidia)

    • The environment meta module loads a compiler (e.g., nvidia)

    • The user can choose to load a toolkit (cuda, cudatoolkit)

  Please note that the new "cuda" module is being released as beta. This new module is generated along with CPE's ROCm toolkit and third-party compiler modules via craypkg-gen. A typical load sequence is shown below.
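
  For example, a typical NVIDIA module load sequence under this flow (module versions omitted; defaults are assumed):

      # The environment meta module loads the nvidia compiler module automatically.
      module load PrgEnv-nvidia
      # Optionally load a CUDA toolkit module (the new "cuda" module is beta).
      module load cuda
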
  • NOTE: In CPE 24.07, a late issue was identified on EX254 and EX235n systems with the NVIDIA TCL modules. This issue impacts NVIDIA systems using the TCL nvhpc and cuda modules. To resolve it, a site administrator should insert the following code snippet at the end of the gen_modulefiles.sh script:

     # Escape the underscore between the Tcl variable references in each cuda
     # modulefile ($SDK_LEVEL_$MOD_LEVEL -> $SDK_LEVEL\_$MOD_LEVEL) so Tcl does not
     # read the underscore as part of the first variable name. Single quotes keep
     # the $ references literal for sed and Tcl.
     for cuda_module in $(ls /opt/modulefiles/cuda/); do
       sed -i 's/\$SDK_LEVEL_\$MOD_LEVEL/$SDK_LEVEL\\_$MOD_LEVEL/g' /opt/modulefiles/cuda/${cuda_module}
     done

     # Append the NVHPC level to the Linux_<arch> path in each nvhpc modulefile.
     for nvhpc_module in $(ls /opt/modulefiles/nvhpc/); do
       sed -i -E 's|Linux_(.*64)$|Linux_\1/$NVHPC_LEVEL|' /opt/modulefiles/nvhpc/${nvhpc_module}
     done
  On HPCM systems, the script is located in the image root at <IMAGE_ROOT>/etc/cray-pe.d/gen_modulefiles.sh.

CPE 24.07 Product Versions:

NOTE: Use of **** indicates a new or updated component version compared to CPE 24.03.

HPE Cray Programming Environment for HPE Cray Supercomputing EX and HPE Cray Supercomputing Systems with HPCM

Cray Compiling Environment – CCE

    cce 18.0.0 ****

Cray Message Passing Toolkit - CMPT

    cray-mpich 8.1.30 ****

    cray-mpixlate 1.0.5 ****

    cray-dsmml 0.3.0

    cray-pmi 6.1.15 ****

    cray-openshmemx 11.7.2 ****

Application Launch Tools - ALT

    cray-pals 1.3.2

Cray Debugging Support Tools – CDST

    cray-cti 2.18.4 ****

    gdb4hpc 4.16.2 ****

    cray-ccdb 5.0.4 ****

    cray-stat 4.12.3 ****

    atp 3.15.4 ****

    valgrind4hpc 2.13.3 ****

    sanitizers4hpc 1.1.3 ****

    cray-dyninst 12.3.2 ****

    cray-mrnet 5.1.3 ****

Cray Performance Measurement & Analysis Tools – CPMAT

    perftools 24.07.0 ****

    cray-papi 7.1.0.2 ****

Cray Scientific and Math Libraries - CSML

    cray-libsci 24.07.0 ****

    cray-libsci-acc 24.07.0 ****

    cray-fftw 3.3.10.8 ****

Cray Deep Learning Tools

    craype-dl-plugin-ftr 22.06.1.2

    craype-dl-plugin-py3 24.03.1

Cray Environment Setup and Compiling support – CENV

    craypkg-gen 1.3.33 ****

    craype 2.7.32 ****

    cpe-prgenv 8.5.0

    cray-lmod 8.7.37 ****

    cray-modules 3.2.11.7

Third party products

    cray-hdf5 1.14.3.1 ****

    cray-netcdf 4.9.0.13 ****

    cray-parallel-netcdf 1.12.3.13 ****

    cray-python 3.11.7

    cray-R 4.4.0 ****

Third-party products supported

    Totalview 2024.1.21

    Forge 23.1.2

*******************************************************************************************

Certain components, files, or programs contained within this package or product are Copyright 2024 Hewlett Packard Enterprise Development LP. All trademarks used in this document are the property of their respective owners.