- Timestamp: Feb 2, 2024, 1:06:35 PM
- Location: BOL/LMDZ_Setup
- Files: 2 edited
BOL/LMDZ_Setup/script_SIMU
r4752 → r4796

@@ -5,8 +5,8 @@
 # Nombre de processus MPI :
 #SBATCH --ntasks=8
-# number of MPI processes per node :
-# SBATCH --ntasks-per-node=5
+##### number of MPI processes per node : 40(procs/node on Jean-Zay) / cpus-per-task (ex : =5 for 8 OMP)
+####SBATCH --ntasks-per-node=5  # if specified, also add "#SBATCH --nodes= ..." with nodes=ntasks/(ntasks-per-node)
 # nombre de threads OpenMP
-#SBATCH --cpus-per-task=5
+#SBATCH --cpus-per-task=8
 # de Slurm "multithread" fait bien reference a l'hyperthreading.
 #SBATCH --hint=nomultithread # 1 thread par coeur physique (pas d'hyperthreading)

@@ -20,9 +20,10 @@
 set -ex

 # Number of MPI processes :
 ntasks=8
-nthreads=4
-# number of OpenMP threads:
-export OMP_NUM_THREADS=$nthreads
+# number of OpenMP threads
+nthreads=8
+export OMP_NUM_THREADS=$nthreads  # for Jean-Zay it would be recommendend to use :
+                                  # export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
 # private memory for each thread
 export OMP_STACKSIZE=800M
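The new comments assume Jean-Zay's 40 physical cores per node. The sketch below (hypothetical, not part of the changeset) shows a job header with the optional per-node directives filled in for the new values: --cpus-per-task=8 leaves 40/8 = 5 MPI tasks per node, and 8 tasks at 5 per node need 2 nodes (rounded up; the comment's nodes=ntasks/(ntasks-per-node) is not an integer here). The launcher line and executable name are placeholders.

    #!/bin/bash
    # Hypothetical Jean-Zay header consistent with the r4796 values (assumes 40 cores/node)
    #SBATCH --ntasks=8              # total number of MPI processes
    #SBATCH --cpus-per-task=8       # OpenMP threads per MPI process
    #SBATCH --ntasks-per-node=5     # 40 cores/node divided by cpus-per-task
    #SBATCH --nodes=2               # ntasks / ntasks-per-node, rounded up (8/5 -> 2)
    #SBATCH --hint=nomultithread    # one thread per physical core, no hyperthreading
    set -ex
    # let each MPI rank use exactly the cores Slurm reserved for it
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
    export OMP_STACKSIZE=800M       # private stack memory for each OpenMP thread
    srun ./gcm.e                    # placeholder executable name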
BOL/LMDZ_Setup/setup.sh
r4734 → r4796

@@ -456,5 +456,5 @@
 # Choix du nombre de processeurs
 # NOTES :
-# omp=8 by default , but we need
+# omp=8 by default (for Jean-Zay must be a divisor of 40 procs/node), but we need
 # omp=1 for SPLA (only MPI parallelisation)
 # omp=2 for veget=CMIP6 beacause of a bug in ORCHIDEE/src_xml/xios_orchidee.f90
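The added comment encodes the constraint that the OpenMP thread count must divide the 40 cores of a Jean-Zay node so that whole MPI tasks fit on each node. A minimal shell sketch of that check (hypothetical helper, not in setup.sh; variable names are illustrative):

    # Hypothetical check, assuming 40 cores per node on Jean-Zay
    cores_per_node=40
    omp=8
    if [ $(( cores_per_node % omp )) -ne 0 ]; then
        echo "omp=$omp does not divide $cores_per_node cores/node" >&2
        exit 1
    fi
    echo "MPI tasks per node: $(( cores_per_node / omp ))"   # 5 when omp=8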