\documentclass[a4paper,10pt]{article}
%\usepackage{graphicx}
\usepackage{natbib} % if using bibtex
%\usepackage[francais]{babel}
%\usepackage[latin1]{inputenc} % direct accents (é...), with babel
%\usepackage{rotating}

\setlength{\hoffset}{-1.in}
\setlength{\oddsidemargin}{3.cm}
\setlength{\textwidth}{15.cm}
\setlength{\marginparsep}{0.mm}
\setlength{\marginparwidth}{0.mm}

\setlength{\voffset}{-1.in}
\setlength{\topmargin}{0.mm}
\setlength{\headheight}{0.mm}
\setlength{\headsep}{30.mm}
\setlength{\textheight}{24.cm}
\setlength{\footskip}{1.cm}

\setlength{\parindent}{0.mm}
\setlength{\parskip}{1em}
\newcommand{\ten}[1]{$\times 10^{#1}$~}
\renewcommand{\baselinestretch}{1.}

\begin{document}
\pagestyle{plain}

\begin{center}
{\bf \LARGE
Documentation for LMDZ, Planets version

\vspace{1cm}
\Large
Running the GCM in parallel using MPI
-- Venus
}

\vspace{1cm}
S\'ebastien Lebonnois

\vspace{1cm}
Latest version: \today
\end{center}


\section{Compilation}

To compile the GCM for parallel runs using MPI, you first need to identify the MPI compiler wrapper (\textsf{mpif90}) you want to use on your machine. With that knowledge, you can build your own \textsf{arch-$<$your\_architecture$>$.fcm} file in the \textsf{LMDZ.COMMON/arch/} directory.
You can find inspiration, for example, in \textsf{arch-GNOMEp.fcm} (for the {\em gnome} computation server of UPMC), which uses \textsf{ifort}.
For the LMD local computation machines (e.g. {\em levan}), you can use \textsf{arch-linux-64bit-para.fcm}.

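As an illustration, here is a minimal sketch of such a file, assuming an \textsf{ifort}-based \textsf{mpif90} wrapper; the file name \textsf{arch-myMachine.fcm} and all the flags below are placeholders to adapt to your machine and compiler:

\begin{verbatim}
# arch-myMachine.fcm -- illustrative sketch, adapt to your machine
%COMPILER            mpif90
%LINK                mpif90
%AR                  ar
%MAKE                make
%FPP_FLAGS           -P -traditional
%FPP_DEF             NC_DOUBLE
%BASE_FFLAGS         -i4 -r8 -auto
%PROD_FFLAGS         -O2 -ip
%DEV_FFLAGS          -O1
%DEBUG_FFLAGS        -g -traceback
%MPI_FFLAGS
%OMP_FFLAGS
%BASE_LD
%MPI_LD
%OMP_LD
\end{verbatim}
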
You also need to have \textsf{netcdf} and \textsf{ioipsl} compiled with the same compiler and main options. The paths to these libraries (and to their include files) must be written in the \textsf{arch-$<$your\_architecture$>$.path} file.

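A minimal sketch of such a \textsf{.path} file is given below; the variable names follow the convention of the other \textsf{.path} files in \textsf{LMDZ.COMMON/arch/}, and all the paths are placeholders to adapt to where \textsf{netcdf} and \textsf{ioipsl} are installed on your machine:

\begin{verbatim}
NETCDF_LIBDIR="-L/usr/local/netcdf/lib"
NETCDF_LIB="-lnetcdff -lnetcdf"
NETCDF_INCDIR="-I/usr/local/netcdf/include"
IOIPSL_LIBDIR="-L/path/to/ioipsl/modipsl/lib"
IOIPSL_LIB="-lioipsl"
IOIPSL_INCDIR="-I/path/to/ioipsl/modipsl/lib"
\end{verbatim}
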
An example of a command line to compile the Venus GCM using \textsf{makelmdz} is then:

\textsf{makelmdz -arch $<$your\_architecture$>$ -parallel mpi -d $<$nlon$>$x$<$nlat$>$x$<$nlev$>$ -p venus gcm}

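For instance, with the hypothetical \textsf{arch-myMachine.fcm} file above and an illustrative 48$\times$32$\times$50 grid, this becomes:

\begin{verbatim}
makelmdz -arch myMachine -parallel mpi -d 48x32x50 -p venus gcm
\end{verbatim}
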
\section{Run}

To run the simulation, you have to use the \textsf{mpirun} launcher corresponding to your \textsf{mpif90} compiler.

The command line is:

\textsf{mpirun -n $<$number\_of\_procs$>$ gcm.e}

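For example, to run on 8 processes:

\begin{verbatim}
mpirun -n 8 gcm.e
\end{verbatim}

Note that the MPI parallelization of the dynamics splits the domain into latitude bands, so the number of processes that can be used efficiently is limited by the number of latitude points of your grid.
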
\section{Outputs}

Each of the processors used during the run writes its own portion of the \textsf{hist$<$mth/day/ins$>$.nc} files. To gather these portions back into a single file, a tool is provided in the \textsf{ioipsl} directory that you built following the SVN instructions.

This tool is called \textsf{rebuild} and is located in the \textsf{ioipsl/modipsl/bin/} directory.

To use it, the command line is:

\textsf{rebuild -f -o $<$name\_of\_final\_file$>$.nc hist$<$mth/day/ins$>$\_*.nc}

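For example, to gather the pieces of the monthly-mean files written by each process into a single \textsf{histmth.nc}:

\begin{verbatim}
rebuild -f -o histmth.nc histmth_*.nc
\end{verbatim}
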
\end{document}