Parallel Programming Tools
Forecast Systems Laboratory, Aviation Division, Advanced Computing Branch
There are several obstacles to the operational use of current distributed
memory HPC systems: portability, programmability, and performance.
Historically, NWP models have been large codes that did not run efficiently
on HPC systems without some restructuring. Typically, a message passing
library such as PVM or MPI is used to handle parallel programming issues
including inter-process communication, process synchronization, global
operations, data decomposition, and I/O handling. The development of an
efficient message passing standard (currently MPI) supported by most
vendors has improved the portability of models on distributed memory HPCs.
However, MPI is sufficiently low level that it can be difficult to use.
To speed code parallelization, the ACB has developed a high-level tool
called the Scalable Modeling System (SMS) that simplifies the work required
to port and run NWP models on HPCs while offering good scalable performance.
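To make MPI's low-level nature concrete, the following is a minimal sketch,
not taken from any of the models discussed here, of the bookkeeping a single
halo exchange requires when coded by hand. The array name, sizes, and 1-D
decomposition are all illustrative assumptions.

! Hand-coded MPI halo exchange on a 1-D decomposition (illustrative only).
! Neighbor ranks, tags, and buffer bounds are all the modeler's problem.
program halo_demo
  use mpi
  implicit none
  integer, parameter :: nx = 100        ! local subdomain width (assumed)
  real    :: u(0:nx+1)                  ! one halo point on each side
  integer :: rank, nprocs, left, right, ierr
  integer :: status(MPI_STATUS_SIZE)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! Neighbors on a non-periodic 1-D decomposition.
  left  = rank - 1
  right = rank + 1
  if (left  < 0)       left  = MPI_PROC_NULL
  if (right >= nprocs) right = MPI_PROC_NULL

  u = real(rank)

  ! Ship the rightmost interior point right while receiving the left halo,
  ! then the reverse direction; two calls for one logical exchange.
  call MPI_Sendrecv(u(nx), 1, MPI_REAL, right, 0, &
                    u(0),  1, MPI_REAL, left,  0, &
                    MPI_COMM_WORLD, status, ierr)
  call MPI_Sendrecv(u(1),    1, MPI_REAL, left,  1, &
                    u(nx+1), 1, MPI_REAL, right, 1, &
                    MPI_COMM_WORLD, status, ierr)

  call MPI_Finalize(ierr)
end program halo_demo

Multiply this by every decomposed array and every stencil sweep in a
19,000-line model and the appeal of a higher-level tool is clear.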
The Scalable Modeling System
SMS is a directive-based parallelization tool that translates Fortran code
into a parallel version that runs efficiently on both shared and distributed
memory systems, including the IBM SP2, Cray T3E, SGI Origin, Sun clusters,
Alpha Linux clusters, and Intel clusters. This software has been used
successfully since 1993 to parallelize and run many oceanic and atmospheric
models, some of which produce weather forecasts for the National Weather
Service.
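The fragment below is a hedged sketch of what this directive style looks
like on a simple smoothing loop. The directive spellings and argument forms
only approximate the CSMS$ conventions; the Reference Manual listed under
SMS Documentation below defines the actual syntax, and the loop itself is an
invented example rather than code from any of the models named here.

C     Directive-based parallelization in the SMS style (syntax approximate).
C     The serial loop nest is untouched: the translator expands the
C     directives into decomposition, loop-bound, and communication code.
      subroutine smooth(u, un, im, jm)
CSMS$DISTRIBUTE(decomp, <im>, <jm>) BEGIN
      real u(im, jm), un(im, jm)
CSMS$DISTRIBUTE END
      integer i, j
CSMS$EXCHANGE(u)
CSMS$PARALLEL(decomp, <i>, <j>) BEGIN
      do j = 2, jm - 1
        do i = 2, im - 1
          un(i, j) = 0.25 * (u(i-1, j) + u(i+1, j)
     &                     + u(i, j-1) + u(i, j+1))
        end do
      end do
CSMS$PARALLEL END
      end

Because the directives are comments to a standard Fortran compiler, the same
source still builds and runs serially, which is what makes incremental
parallelization and direct serial/parallel comparison practical.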
Models Parallelized using SMS

Atmospheric Models
- 32 km Eta Model: National Centers for Environmental Prediction (NCEP)
- Rapid Update Cycle Model (RUC): NOAA's Forecast Systems Laboratory (NOAA/FSL)
- Quasi-Non-Hydrostatic Model (QNH): NOAA's Forecast Systems Laboratory (NOAA/FSL)
- Global Forecast System (GFS): Central Weather Bureau (CWB), Taiwan
- Typhoon Forecast System (TFS): Central Weather Bureau (CWB), Taiwan
- Atmospheric Chemistry Model (NALROM): NOAA's Aeronomy Laboratory (NOAA/AL)

Oceanic Models
- Princeton Ocean Model (POM): NASA Goddard; NOAA Environmental Technology Laboratory (NOAA/ETL)
- Regional Ocean Modeling System (ROMS): Rutgers University; National Institute of Water and Atmospheric Research (NIWA), New Zealand; University of Alaska at Fairbanks (UAF); NOAA Pacific Marine Environmental Laboratory (NOAA/PMEL)
- Hybrid Coordinate Ocean Model (HYCOM): Los Alamos National Laboratory
These models contain structured regular grids on which solutions are computed
using either finite difference approximations or Gauss-Legendre spectral
methods. SMS also provides support for mesh refinement and can transform data
between grids that have been decomposed differently (e.g., between grid point
and spectral space). While the tool has been tailored toward finite difference
and spectral weather and climate models, the approach is sufficiently general
to be applied to other structured grid codes. As SMS has matured, the time
and effort required to parallelize codes for MPPs have been reduced
significantly. Code parallelization has become simpler because SMS provides
support for advanced operations including incremental parallelization and
parallel debugging.
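As a small illustration of the decomposition arithmetic such a tool manages
on the user's behalf, the routine below computes the local index range of a
1-D block decomposition of a structured grid. The routine name and interface
are hypothetical, not part of SMS.

! Hypothetical helper: local bounds for a 1-D block decomposition.
! Each process gets a contiguous block; any remainder is spread over
! the first ranks so block sizes differ by at most one point.
subroutine local_bounds(n, nprocs, rank, is, ie)
  implicit none
  integer, intent(in)  :: n, nprocs, rank
  integer, intent(out) :: is, ie
  integer :: base, rem
  base = n / nprocs          ! minimum points per process
  rem  = mod(n, nprocs)      ! the first rem ranks get one extra point
  if (rank < rem) then
    is = rank * (base + 1) + 1
    ie = is + base
  else
    is = rem * (base + 1) + (rank - rem) * base + 1
    ie = is + base - 1
  end if
end subroutine local_bounds

For example, ten grid points over three processes yields the local ranges
1-4, 5-7, and 8-10.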
High Performance: SMS provides a number of performance optimizations.
The SMS run-time libraries have been optimized to speed inter-processor
communications using techniques such as aggregation. Array aggregation
permits multiple model variables to be combined into a single communications
call to reduce message-passing latency. SMS also allows the user to perform
computations in the halo region to reduce communications. High-performance
I/O is also provided by SMS. Since atmospheric models typically output
forecasts several times during a model run, SMS can output these data
asynchronously with respect to model execution. These optimizations can lead
to significantly faster execution times.
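The fragment below sketches the aggregation idea in hand-written MPI terms:
three variables that need the same halo update are packed into one buffer so
the message-passing latency is paid once rather than three times. The routine
and variable names are hypothetical; this is not the SMS implementation.

! Illustrative array aggregation on a 1-D decomposition (assumed names).
subroutine agg_exchange(t, q, p, nx, left, right, comm)
  use mpi
  implicit none
  integer, intent(in)    :: nx, left, right, comm
  real,    intent(inout) :: t(0:nx+1), q(0:nx+1), p(0:nx+1)
  real    :: sbuf(3), rbuf(3)
  integer :: status(MPI_STATUS_SIZE), ierr

  ! Pack the rightmost interior point of each variable.
  sbuf = (/ t(nx), q(nx), p(nx) /)

  ! One aggregated message instead of three separate exchanges.
  call MPI_Sendrecv(sbuf, 3, MPI_REAL, right, 0, &
                    rbuf, 3, MPI_REAL, left,  0, &
                    comm, status, ierr)

  ! Unpack into the left halo point of each variable.
  t(0) = rbuf(1)
  q(0) = rbuf(2)
  p(0) = rbuf(3)
end subroutine agg_exchange

Since per-message latency is fixed while bandwidth cost grows with message
size, combining many small halo messages into one larger one is most
profitable exactly where NWP models hurt most: frequent, small exchanges.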
Distributed Memory Performance
A performance comparison was done between the hand-coded MPI Eta model
(32 km version) running operationally at NCEP on 88 processors and the same
Eta model parallelized using SMS. The MPI Eta model was considered a good
candidate for a fair comparison since it (1) was parallelized by IBM for
NCEP, (2) is an operational model used to produce daily weather forecasts
for the U.S. National Weather Service, and (3) has been optimized for high
performance on the IBM SP2. Fewer than 200 directives were added to the
19,000-line Eta model during SMS parallelization. Results of this study show
that the SMS Eta is 7 percent faster than the MPI Eta on 88 processors of
NCEP's IBM SP2.
Shared Memory Performance
Another performance study compared a shared memory version of the Regional
Ocean Modeling System (ROMS) to a recently parallelized SMS version. In this
study we found that the SMS version provided performance equivalent to the
shared-memory version at low processor counts and scaled better to higher
processor counts. Further analysis of this result is required and will be
presented in an upcoming SMS performance paper.
SMS Documentation
The following documents are available in MS Office 2000 Word or PDF format.
Viewing or printing files downloaded in PDF format will require the freely
available Adobe Acrobat Reader.
- The User's Guide (~134 pages, 530 Kb) explains how to use SMS to parallelize Fortran codes.
- The Reference Manual (~38 pages, 150 Kb) provides complete details about each SMS directive.
- Several papers describe SMS, including an overview paper recently accepted for publication by the journal Parallel Computing.
- View-graphs from a recent seminar giving an overview of SMS are viewable.
- View-graphs from a recent SMS training course are viewable.
SMS Software Downloads
SMS is supported on the following systems: IBM-SP2, SGI-Origin, Cray T3E,
and clusters of Alpha, Intel, or Sun processors.
SMS software is freely available, but there are some restrictions.
To install, gunzip the downloaded file, un-tar it, cd to the unpacked SMS
directory, and then follow the instructions in the INSTALL file.
Software patches are also available for user-reported bugs. These patch
releases have not been tested on all platforms where SMS is supported.
When these changes are integrated into a full release, they will be tested
on all supported machines.
All questions regarding SMS may be directed to sms-info.fsl@noaa.gov.
Additional Information
Once you begin using SMS to parallelize code, the following information
may be useful.
Any additional questions should be directed to sms-info.fsl@noaa.gov
Other Activities and Interests
- We are following the development of the Earth System Modeling Framework (ESMF).
- Applications performance and portability.
- Hybrid programming using both SMS (distributed memory) and OpenMP (shared memory); a generic sketch of this pattern appears after this list.
- Performance tuning on new machines/architectures.
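A generic sketch of the hybrid pattern follows, with plain MPI standing in
for the distributed-memory layer (SMS generates comparable message-passing
calls): processes divide the domain while OpenMP threads divide each
process's loops. All names and sizes here are illustrative assumptions.

! Hybrid distributed/shared-memory sketch: MPI across processes,
! OpenMP threads within each process.
program hybrid_demo
  use mpi
  implicit none
  integer, parameter :: nx = 1000       ! local points (assumed)
  real    :: u(nx), local_sum, total
  integer :: i, rank, provided, ierr

  ! Request funneled threading: only the main thread makes MPI calls.
  call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  u = real(rank + 1)

  local_sum = 0.0
!$OMP PARALLEL DO REDUCTION(+:local_sum)
  do i = 1, nx
    local_sum = local_sum + u(i) * u(i)   ! threads share this loop
  end do
!$OMP END PARALLEL DO

  ! Processes combine their thread-computed partial sums.
  call MPI_Allreduce(local_sum, total, 1, MPI_REAL, MPI_SUM, &
                     MPI_COMM_WORLD, ierr)
  call MPI_Finalize(ierr)
end program hybrid_demo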
Prepared by Mark Govett, Mark.W.Govett@noaa.gov
Date of last update: September-2003