WRF-NMM Model Version 3.2 (March 31, 2010)

----------------------------
WRF-NMM PUBLIC DOMAIN NOTICE
----------------------------

WRF-NMM was developed at the National Centers for
Environmental Prediction (NCEP), which is part of
NOAA's National Weather Service.  As a government
entity, NCEP makes no proprietary claims, either
statutory or otherwise, to this version and release of
WRF-NMM and considers WRF-NMM to be in the public
domain for use by any person or entity for any purpose
without any fee or charge. NCEP requests that any WRF
user include this notice on any partial or full copies
of WRF-NMM. WRF-NMM is provided on an "AS IS" basis
and any warranties, either express or implied,
including but not limited to implied warranties of
non-infringement, originality, merchantability and
fitness for a particular purpose, are disclaimed. In
no event shall NOAA, NWS or NCEP be liable for any
damages whatsoever, whether direct, indirect,
consequential or special, that arise out of or in
connection with the access, use or performance of
WRF-NMM, including infringement actions.

================================================

V3 Release Notes:
-----------------

This is the main directory for the WRF Version 3 source code release.

- For directions on compiling WRF for NMM, see below or the
  WRF-NMM Users' Web page (http://www.dtcenter.org/wrf-nmm/users/).
- Read the README.namelist file in the run/ directory (or on
  the WRF-NMM Users' page), and make changes carefully.

For questions, send mail to wrfhelp@ucar.edu

Release Notes:
-------------------

Version 3.2 was released on March 31, 2010.

- For more information on the WRF V3.2 release, visit the WRF-NMM Users home page,
  http://www.dtcenter.org/wrf-nmm/users/, and read the online User's Guide.
- The WRF V3 executable will work with V3.1 wrfinput/wrfbdy files.  As
  always, rerunning the new programs is recommended.

The online User's Guide has also been updated.
================================================

The ./compile script at the top level allows for easy selection of
the NMM and ARW cores of WRF at compile time.

   - Specify your WRF-NMM option by setting the appropriate environment variables:

         setenv WRF_NMM_CORE 1
         setenv WRF_NMM_NEST 1    (if nesting capability is desired)
         setenv HWRF 1            (if HWRF coupling/physics are desired)

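     The setenv syntax above is for csh-type shells; in sh/bash the
     equivalents would be, for example:

         export WRF_NMM_CORE=1
         export WRF_NMM_NEST=1    # if nesting capability is desired
         export HWRF=1            # if HWRF coupling/physics are desired
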
   - The Registry files for NMM and ARW are not integrated
     yet. There are separate versions:

         Registry/Registry.NMM         <-- for NMM
         Registry/Registry.NMM_NEST    <-- for NMM with nesting
         Registry/Registry.EM          <-- for ARW (formerly known as Eulerian Mass)


How to configure, compile and run?
----------------------------------

- In the WRFV3 directory, type:

   configure

  This will create a configure.wrf file that has appropriate compile
  options for the supported computers. Edit your configure.wrf file as needed.

  Note: WRF requires the netCDF library. If your netCDF library is installed in
        an unusual directory, set the environment variable NETCDF before you type
        'configure'. For example:

        setenv NETCDF /usr/local/lib32/r4i4

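        If you are not sure where your netCDF installation lives, recent
        netCDF distributions ship an nc-config utility that can report
        it (assuming nc-config is on your path):

        setenv NETCDF `nc-config --prefix`
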
- Type:
        compile nmm_real

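  For example, a typical invocation that also keeps a log of the build
  (the >& redirection is csh syntax; the log file name is arbitrary):

        ./compile nmm_real >& compile_nmm.log
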
- If successful, this command will create real_nmm.exe and wrf.exe
  in the main/ directory, and the appropriate executables will be linked into
  the run directories under test/nmm_real, or run/.

- cd to the appropriate test or run directory to run "real_nmm.exe" and "wrf.exe".

- Place the files from WPS (met_nmm.*, geo_nmm_nest*)
  in the appropriate directory, and type

  real_nmm.exe

  to produce wrfbdy_d01 and wrfinput_d01. Then type

  wrf.exe

  to run.

- If you use mpich, type

  mpirun -np number-of-processors wrf.exe

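  For example, a 4-processor run of both programs, followed by a quick look
  at the per-task log files that WRF writes when run under MPI
  (rsl.out.* / rsl.error.*):

  mpirun -np 4 real_nmm.exe
  mpirun -np 4 wrf.exe
  tail rsl.out.0000
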
=============================================================================

What is in WRF-NMM V3.2?

* Dynamics:

  - The WRF-NMM model is a fully compressible, non-hydrostatic model with a
    hydrostatic option.

  - Supports one-way and two-way static and moving nests.

  - The terrain-following hybrid pressure-sigma vertical coordinate is used.

  - The grid staggering is the Arakawa E-grid.

  - The same time step is used for all terms.

  - Time stepping:
     - Horizontally propagating fast waves: Forward-backward scheme
     - Vertically propagating sound waves: Implicit scheme

  - Advection (time):
     T, U, V:
      - Horizontal: The Adams-Bashforth scheme
      - Vertical:   The Crank-Nicolson scheme
     TKE, water species: Forward, flux-corrected (called every two time steps) / Eulerian,
     Adams-Bashforth and Crank-Nicolson with monotonization.

  - Advection (space):
     T, U, V:
      - Horizontal: Energy- and enstrophy-conserving,
        quadratic conservative, second order

      - Vertical: Quadratic conservative, second order, implicit

      - Tracers (water species and TKE): upstream, positive definite, conservative antifiltering
        gradient restoration; optional, see the next bullet.

      - Tracers (water species, TKE, and test tracer rrw): Eulerian with monotonization, coupled with
        the continuity equation, conservative, positive definite, monotone, optional.  To turn it on/off,
        set the logical switch "euler" in solve_nmm.F to .true./.false. (see the sketch at the end of
        this section).  The monotonization parameter steep in subroutine mono should be in the range
        0.96-1.0.  For most natural tracers steep=1. should be adequate.  Smaller values of steep are
        recommended for idealized tests with very steep gradients.  This option is available only with
        Ferrier microphysics.

  - Horizontal diffusion: Forward, second order, "Smagorinsky-type"

  - Vertical diffusion:
     See the "Free atmosphere turbulence above surface layer" item
     in the "Physics" section below.

  - Added a new highly conservative passive advection scheme to v3.2

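  As noted in the tracer-advection bullet above, a quick way to locate the
  "euler" switch before editing it (assuming the usual WRFV3 source layout):

        grep -n "euler" dyn_nmm/solve_nmm.F
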
Added Operational Hurricane WRF (HWRF) components to v3.2. These enhancements include:
     - Vortex-following moving nest for NMM
     - Ocean coupling (with POM)
     - Changes in diffusion coefficients
     - Modifications/additions to physics schemes (tuned for the tropics)
         - Updated existing SAS cumulus scheme
         - Updated existing GFS boundary layer scheme
         - Added new HWRF microphysics scheme
         - Added new HWRF radiation scheme
     Please see the WRF for Hurricanes webpage for more details:
         http://www.dtcenter.org/HurrWRF/users


* Physics:

  - Explicit microphysics: WRF Single-Moment 5- and 6-class /
    Ferrier (used operationally at NCEP) / Thompson [a new version in 3.1]
    / HWRF microphysics (used operationally at NCEP for HWRF)

  - Cumulus parameterization: Kain-Fritsch with shallow convection /
    Betts-Miller-Janjic (used operationally at NCEP) / Grell-Devenyi ensemble
    / Simplified Arakawa-Schubert (used operationally at NCEP for HWRF)

  - Free atmosphere turbulence above surface layer: Mellor-Yamada-Janjic (used operationally at NCEP)

  - Planetary boundary layer: YSU / Mellor-Yamada-Janjic (used operationally at NCEP)
    / NCEP Global Forecast System scheme (used operationally at NCEP for HWRF)
    / GFS / Quasi-Normal Scale Elimination

  - Surface layer: Similarity-theory scheme with viscous sublayers
    over both solid surfaces and water points (Janjic; used operationally at NCEP)
    / GFS / YSU / Quasi-Normal Scale Elimination / GFDL surface layer (used operationally at NCEP for HWRF)

  - Soil model: Noah land-surface model (4-level; used operationally at NCEP) /
    RUC LSM (6-level) / GFDL slab model (used operationally at NCEP for HWRF)

  - Radiation:
    - Longwave radiation: GFDL scheme (Fels-Schwarzkopf) (used
      operationally at NCEP) / Modified GFDL scheme (used operationally
      at NCEP for HWRF) / RRTM
    - Shortwave radiation: GFDL scheme (Lacis-Hansen) (used operationally
      at NCEP) / Modified GFDL shortwave (used operationally at NCEP
      for HWRF) / Dudhia

  - Gravity wave drag with mountain wave blocking (Alpert; Kim and Arakawa)

  - Sea surface temperature updates during long simulations

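  As an illustration, the NCEP-operational combination above (Ferrier
  microphysics, Betts-Miller-Janjic cumulus, Mellor-Yamada-Janjic PBL and
  surface layer, Noah LSM, GFDL radiation) corresponds roughly to the
  following &physics entries in namelist.input; the option numbers follow
  the standard WRF namelist, so check README.namelist before editing:

   &physics
    mp_physics         = 5,
    cu_physics         = 2,
    bl_pbl_physics     = 2,
    sf_sfclay_physics  = 2,
    sf_surface_physics = 2,
    ra_lw_physics      = 99,
    ra_sw_physics      = 99,
   /
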
* WRF Software:

  - Hierarchical software architecture that insulates scientific code
    (Model Layer) from computer architecture (Driver Layer)
  - Multi-level parallelism supporting distributed-memory (MPI)
  - Active data registry: defines and manages model state fields, I/O,
    nesting, configuration, and numerous other aspects of WRF through a single file,
    called the Registry
  - Two-way nesting:
      Easy to extend: forcing and feedback of new fields specified by
        editing a single table in the Registry
      Efficient: 5-8% overhead on 64 processes of IBM
  - Enhanced I/O options:
      NetCDF and Parallel HDF5 formats
      Nine auxiliary input and history output streams separately controllable through the
        namelist
      Output file names and time-stamps specifiable through the namelist
  - Efficient execution on a range of computing platforms:
      IBM SP systems (e.g., NCAR "bluevista", "blueice", "bluefire" Power5-based systems)
      IBM Blue Gene
      SGI Origin and Altix
      Linux/Intel
         IA64 MPP (HP Superdome, SGI Altix, NCSA Teragrid systems)
         IA64 SMP
         x86_64 (e.g., TACC's "Ranger", NOAA/GSD "wJet")
         PGI, Intel, Pathscale, gfortran, g95 compilers supported
      Sun Solaris (single threaded and SMP)
      Cray X1, X1e (vector), XT3/4 (Opteron)
      Mac Intel/ppc, PGI/ifort/g95
      NEC SX/8
      HP-UX
      Fujitsu VPP 5000
  - RSL_LITE: communication layer, scalable to very large domains, supports nesting.
  - I/O: NetCDF, parallel NetCDF (Argonne), HDF5, GRIB, raw binary, quilting (asynchronous I/O),
    MCEL (coupling)
  - ESMF Time Management, including exact arithmetic for fractional
    time steps (no drift).
  - ESMF integration: WRF can be run as an ESMF component.
  - Improved documentation, both on-line (web-based browsing tools) and in-line
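
  For instance, history file naming and one of the auxiliary output streams
  can be controlled from the &time_control block of namelist.input, roughly
  as follows (illustrative values; variable names as in the standard WRF
  namelist, see README.namelist):

   &time_control
    history_outname   = "wrfout_d<domain>_<date>",
    auxhist2_outname  = "auxhist2_d<domain>_<date>",
    auxhist2_interval = 60,
    io_form_auxhist2  = 2,
   /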

--------------------------------------------------------------------------