Changeset 1424 for trunk/LMDZ.GENERIC
- Timestamp: May 7, 2015, 5:38:31 PM
- File: 1 edited
trunk/LMDZ.GENERIC/DOC/run.tex
(diff r1413 → r1424)

\end{verbatim}

\section{Compiling the LMDZ.GENERIC model (sequential only)}
\label{sc:run1}

…

makegcm -d 32x32x20 -p std -O "-g -fpe0 -traceback" gcm
\end{verbatim}
%**********
\section{Compiling the LMDZ.COMMON model (sequential or parallel)}
\label{sc:run1_common}
\begin{enumerate}
\item Prerequisites:
\begin{itemize}
\item[$\bullet$] The LMDZ.COMMON package and the LMDZ.OTHER\_MODEL package containing the physics you want (e.g. LMDZ.GENERIC or LMDZ.MARS), both downloaded.
\item[$\bullet$] An available MPI library and its wrapper compiler (mpif90, mpiifort, ...).
\item[$\bullet$] Optional (but recommended): the fcm build tool, available at:
\begin{itemize}
\item LMD: /distrib/local/fcm/bin
\item Ciclad: /home/millour/FCM\_V1.2/bin
\item Gnome: /san/home/millour/FCM\_V1.2/bin
\item Other machines: fcm is just a collection of Perl scripts; it can be copied over to any other machine, or simply downloaded using svn:\\
svn checkout http://forge.ipsl.jussieu.fr/fcm/svn/PATCHED/FCM\_V1.2
\end{itemize}
\end{itemize}
\item Choose the physics you want to couple with the LMDZ.COMMON dynamical core by creating symbolic links in the LMDZ.COMMON/libf directory.\\
If you want to use the Mars physics:
\begin{verbatim}
cd LMDZ.COMMON/libf
ln -s path/to/LMDZ.MARS/libf/phymars .
ln -s path/to/LMDZ.MARS/libf/aeronomars .
\end{verbatim}
Here, we want the LMDZ.GENERIC physics phystd:
\begin{verbatim}
cd LMDZ.COMMON/libf
ln -s path/to/LMDZ.GENERIC/libf/phystd .
\end{verbatim}
\item To compile, run in the LMDZ.COMMON directory (a worked example is given after this list):
\begin{verbatim}
./makelmdz_fcm -s XX -t XX -d LONxLATxALT -b IRxVI -p physicSuffix
               -arch archFile [-parallel mpi/mpi_omp] gcm
\end{verbatim}
where:
\begin{itemize}
\item[$\bullet$] \textbf{physicSuffix} is \verb|mars| for phymars, \verb|std| for phystd, etc.
\item[$\bullet$] \textbf{archFile} is the name of one of the configuration files in LMDZ.COMMON/arch: use \verb|CICLADifort| for the ifort compiler in a CICLAD environment, \verb|X64_ADA| for the ADA architecture, etc.
\item[$\bullet$] To compile in parallel with MPI, add the \verb|-parallel mpi| option. By default the code is built serial.
\item[$\bullet$] For hybrid MPI-OpenMP parallelisation, add the \verb|-parallel mpi_omp| option.
\item[$\bullet$] For faster compilation, the \verb|-j N| option runs N simultaneous compilation tasks.
\item[$\bullet$] The \verb|-full| option forces a full (re)compilation from scratch.
\item[$\bullet$] The resulting executable is placed in the LMDZ.COMMON/bin directory, with the dimensions included in its name, e.g. gcm\_64x48x29\_phymars\_para.e.
\end{itemize}
\end{enumerate}
NB: it is possible to compile without fcm by replacing \verb|makelmdz_fcm| with \verb|makelmdz|. The resulting executable is then placed in the LMDZ.COMMON directory and named gcm.e.
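As an illustration of the command above, here is what a complete compilation line could look like for an MPI-parallel build with the generic physics on Ciclad. This is a sketch: the grid size, the numbers of bands, tracers and scatterers, and the \verb|-j| value are placeholder values to adapt to your own setup.
\begin{verbatim}
cd LMDZ.COMMON
./makelmdz_fcm -s 2 -t 1 -d 64x48x29 -b 38x36 -p std \
               -arch CICLADifort -parallel mpi -j 8 gcm
\end{verbatim}
If the build succeeds, the executable should appear in LMDZ.COMMON/bin with a name along the lines of gcm\_64x48x29\_phystd\_para.e.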
%**********
\section{Input files (initial states and def files)}
{\bf -} In directory \verb+LMDZ.GENERIC/deftank+

…

[NOTE: WITH THE GENERIC MODEL WE ALMOST ALWAYS START FROM ``startplanet'' FILES]
%**********
\section{Running the model}
\begin{figure}

…

\end{figure}

IMPORTANT: the following line MUST be present in the run.def (or callphys.def) file:
\begin{verbatim}
planet_type = mars
\end{verbatim}
to use the LMDZ.MARS model, or
\begin{verbatim}
planet_type = generic
\end{verbatim}
to use the LMDZ.GENERIC model.
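As a quick optional check (a suggestion, not part of the original instructions), you can verify which value is actually set before launching a run:
\begin{verbatim}
grep planet_type *.def
\end{verbatim}
This should print either \verb|planet_type = mars| or \verb|planet_type = generic|.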
\begin{itemize}
\item[$\bullet$] To run the serial {\bf gcm.e} interactively:\\
Once you have the program {\bf gcm.e},
input files {\bf start.nc} {\bf startfi.nc},
…
\end{verbatim}

You might need more memory: use \verb|ulimit -s unlimited| to change the user limits.\\
You might also want to keep all messages and diagnostics written to standard
output (i.e. the screen). You should then redirect the standard output
…

\item[$\bullet$] To run the MPI-parallel {\bf gcm.e} interactively:
\begin{verbatim}
mpirun -np N gcm.e > gcm.out 2>&1
\end{verbatim}
\verb|-np N| specifies the number of processes to run on.\\
IMPORTANT: one MUST use the \verb|mpirun| command corresponding to the \verb|mpif90| compiler specified in the \verb|arch| file.\\
Output files (restart.nc, diagfi.nc, etc.) are the same as when running in serial, but standard output messages are written by each process.\\
If using chained simulations (run\_mcd/run0 scripts), the command line used to run the gcm in \verb|run0| must be adapted to the local settings.\\
NB: the LMDZ.COMMON dynamics are set to run in double precision, so keep the \verb|NC_DOUBLE| declaration (and the promotion of reals to double precision) in the arch files.
\item[$\bullet$] To run the hybrid MPI/OpenMP-parallel {\bf gcm.e} interactively:
\begin{verbatim}
export OMP_NUM_THREADS=2
export OMP_STACKSIZE=2500MB
mpirun -np 2 gcm.e > gcm.out 2>&1
\end{verbatim}
In this example, each of the 2 MPI processes runs 2 OpenMP threads, with a 2500 MB stack per thread.
\item[$\bullet$] To run the MPI-parallel {\bf gcm.e} with a job scheduler (different on each machine):
\begin{verbatim}
PBS example (on Ciclad):
#PBS -S /bin/bash
#PBS -N job_mpi08
#PBS -q short
#PBS -j eo
#PBS -l "nodes=1:ppn=8"
# go to the directory from which the job was launched
cd $PBS_O_WORKDIR
mpirun gcm_64x48x29_phymars_para.e > gcm.out 2>&1
\end{verbatim}
\begin{verbatim}
LoadLeveler example (on Gnome):
# @ job_name = job_mpi8
# standard output file
# @ output = job_mpi8.out.$(jobid)
# standard error file
# @ error = job_mpi8.err.$(jobid)
# job type
# @ job_type = mpich
# @ blocking = unlimited
# time
# @ class = AP
# number of processes
# @ total_tasks = 8
# @ resources = ConsumableCpus(1) ConsumableMemory(2500 mb)
# @ queue
set -vx
mpirun gcm_32x24x11_phymars_para.e > gcm.out 2>&1
\end{verbatim}
\begin{verbatim}
LoadLeveler example (on Ada):
module load intel/2012.0
# @ output = output.$(jobid)
# @ error = $(output)
# @ job_type = parallel
## number of MPI processes
# @ total_tasks = 8
## memory used by each MPI process
# @ as_limit = 2500mb
# @ wall_clock_limit = 01:00:00
# @ core_limit = 0
# @ queue
set -x
poe ./gcm.e -labelio yes > LOG 2>&1
\end{verbatim}
\item[$\bullet$] To run the hybrid MPI/OpenMP-parallel {\bf gcm.e} with a job scheduler (different on each machine; a hypothetical PBS variant is also sketched after this list):
\begin{verbatim}
LoadLeveler example (on Gnome):
# @ job_name = job_mpi8
# standard output file
# @ output = job_mpi8.out.$(jobid)
# standard error file
# @ error = job_mpi8.err.$(jobid)
# job type
# @ job_type = mpich
# @ blocking = unlimited
# time
# @ class = AP
# number of processes
# @ total_tasks = 8
# @ resources = ConsumableCpus(1) ConsumableMemory(5000 mb)
# @ queue
set -vx
export OMP_NUM_THREADS=2
# otherwise 8 OpenMP threads are launched by default
export OMP_STACKSIZE=2500MB
mpirun gcm_32x24x11_phymars_para.e > gcm.out 2>&1
\end{verbatim}
IMPORTANT: ConsumableMemory must be equal to OMP\_NUM\_THREADS $\times$ OMP\_STACKSIZE.\\
In this case, we are using $8 \times 2 = 16$ cores.
\begin{verbatim}
LoadLeveler example (on Ada):
module load intel/2012.0
# @ output = output.$(jobid)
# @ error = $(output)
# @ job_type = parallel
## number of MPI processes
# @ total_tasks = 8
## number of OpenMP threads attached to each MPI process
# @ parallel_threads = 2
## memory used by each MPI process
# @ as_limit = 5gb
# @ wall_clock_limit = 01:00:00
# @ core_limit = 0
# @ queue
set -x
export OMP_STACKSIZE=2500MB
poe ./gcm.e -labelio yes > LOG 2>&1
\end{verbatim}
IMPORTANT: in this case, each core needs 2.5 GB and we are using 2 OpenMP threads for each MPI process, so \verb|as_limit| $= 2 \times 2.5$ GB.
\end{itemize}
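The documentation above gives a PBS example only for the pure MPI case. As a sketch of what a hybrid MPI/OpenMP submission could look like with PBS (e.g. on Ciclad), assuming 4 MPI processes with 2 OpenMP threads each on a single 8-core node, and with the queue name, resource line and executable name to be adapted to the local installation:
\begin{verbatim}
PBS example (hybrid, hypothetical):
#PBS -S /bin/bash
#PBS -N job_hybrid
#PBS -q short
#PBS -j eo
#PBS -l "nodes=1:ppn=8"
# go to the directory from which the job was launched
cd $PBS_O_WORKDIR
# 2 OpenMP threads per MPI process, 2500 MB of stack per thread
export OMP_NUM_THREADS=2
export OMP_STACKSIZE=2500MB
# 4 MPI processes x 2 OpenMP threads = 8 cores
mpirun -np 4 gcm_64x48x29_phystd_para.e > gcm.out 2>&1
\end{verbatim}
Depending on the MPI implementation, you may need to tell \verb|mpirun| to forward the OpenMP environment variables to the spawned processes, and the memory requested from the scheduler should account for OMP\_NUM\_THREADS $\times$ OMP\_STACKSIZE per MPI process, as in the LoadLeveler examples.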
%**********
\section{Visualizing the output files}