
Facility Division

Objectives

The Facility Division (FD) manages the computers, communications and data networks, and associated peripherals that FSL staff use to accomplish their research and systems-development mission. The FSL Central Facility comprises 60 Sun Microsystems, Inc., Silicon Graphics, Inc. (SGI), and Hewlett-Packard (HP) computers ranging from workstations and servers to a supercomputer-class High Performance Technologies, Inc. (HPTi) massively parallel High-Performance Computer System (HPCS). The facility also contains a variety of meteorological data-ingest interfaces, storage devices, local- and wide-area networks, communications links to external networks, and display devices. Over 675 Internet Protocol (IP)-capable hosts and network devices serve the FSL divisions, including 200 Unix hosts, 277 Alpha Linux nodes in the HPCS, 146 PCs and Macintoshes, 31 X-terminals, and 23 network routers, hubs, and switches. This hardware and associated software enable FSL staff to design, develop, test, evaluate, and transfer to operations advanced weather information systems and new forecasting techniques.

The division designs, develops, upgrades, administers, operates, and maintains the FSL Central Computer Facility. For the past 20 years, the facility has undergone continual enhancements and upgrades in response to changing and expanding FSL project requirements and new advances in computer and communications technology. In addition, FD lends technical support and expertise to other federal agencies and research laboratories in meteorological data acquisition, processing, storage, telecommunications, and networking.

The Central Facility acquires and stores a large variety of conventional (operational) and advanced (experimental) meteorological observations in real time. The ingested data encompass almost all available meteorological observations in the Front Range of Colorado and much of the available data in the entire United States. Data are also received from Canada, Mexico, and the rest of the world. The richness of these meteorological data is illustrated by such diverse datasets as advanced automated aircraft, wind profiler, satellite, Global Positioning System (GPS) moisture, Doppler radar measurements, and hourly surface observations. The Central Facility computer systems (Figure 19) are used to analyze and process these data into meteorological products in real time, store the results, and make the data and products available to researchers, systems developers, and forecasters. The resultant meteorological products cover a broad range of complexity, from simple plots of surface observations to meteorological analyses and model prognoses generated by sophisticated mesoscale computer models.

Accomplishments

Computer Facility

Division staff led the planning and implementation of the relocation of FSL’s computers, network infrastructure, and communication lines to the David Skaggs Research Center (DSRC) in the spring of 1999. Every effort was made to keep facility downtime to a minimum. For example, critical systems were not moved until their backups were relocated to the new building and placed into operation. Desktop systems were moved during weekends so that users would experience negligible (or no) outage. To make optimal use of space, detailed layouts specified the location of every computer in the main FSL computer room and the smaller auxiliary computer rooms. The programming and placement of the door-entry security scramble pads were also carefully planned.

Key FD staff served on the evaluation team and helped choose FSL's new High-Performance Computing System (HPCS). The contract was awarded last September to High Performance Technologies, Inc. (HPTi). The HPCS consists of a 277-node Alpha-Linux cluster using 667-MHz Compaq processors, a 100-terabyte Mass Store System, and a 500-gigabyte Storage Area Network (SAN). The system will be substantially upgraded in late 2000 and in 2002.

The on-line storage capacity of the FSL Auspex Network File System (NFS) server was enhanced. A second cabinet, storage processor, and disk drives were added, which increased the total storage capacity to 200 gigabytes. Also, the Redundant Array of Independent Disks (RAID) Level 5 technology was implemented to facilitate more efficient and reliable use of the NFS server disks and improve access to on-line NIMBUS and NOAAPORT data on /public.
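As background for this choice, RAID Level 5 stripes data across the array and stores rotating parity blocks so that the contents of any single failed disk can be reconstructed from the survivors. The short Python sketch below is illustrative only (it is not FSL software) and shows the XOR-parity idea behind that reconstruction.

    # Illustrative sketch of RAID Level 5 XOR parity (not FSL software).
    # With N data blocks per stripe plus one parity block, any single lost
    # block can be rebuilt by XORing the remaining blocks together.

    def xor_blocks(blocks):
        """Return the byte-wise XOR of equal-length byte blocks."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, value in enumerate(block):
                result[i] ^= value
        return bytes(result)

    # One stripe with three (toy-sized) data blocks and its parity block.
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data)

    # Simulate losing the second block and rebuilding it from the others.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]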

By the end of the year, the FSL Mass Store System (MSS) stored over 20 terabytes of meteorological data, products, and user files. Division Systems Administration staff, in coordination with vendors, found solutions and/or work-arounds to numerous UniTree software and MSS hardware problems.

The first phase of the FSL Hardware Assets Management System (HAMS) development was completed. Based on an Oracle database management system, HAMS provides storage, maintenance, and retrieval of detailed records on each piece of FSL equipment and software. The system contains vendor, warranty, and support contact information for each asset, and allows multiple levels of input, viewing, and searching to track equipment moves, upgrades, and reconfigurations. It also provides vital statistics and attributes about FSL hardware and software to management, technical support staff, and developers, and soon will provide accurate information on equipment and software maintenance to the FSL Office of Administration and Research. Platform-independent Web browsers serve as the primary HAMS interface and offer extensive query capabilities to satisfy a wide variety of day-to-day requests for asset information and maintenance.
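The following minimal sketch illustrates the kind of asset query the HAMS Web interface supports, using Python's sqlite3 module as a stand-in for the Oracle back end; the table layout, column names, and sample record are assumptions for illustration, not the actual HAMS schema.

    # Illustrative only: a HAMS-style asset query, with sqlite3 standing in
    # for the Oracle database. Schema and sample data are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE assets (
        asset_id TEXT PRIMARY KEY, description TEXT, vendor TEXT,
        location TEXT, warranty_end TEXT)""")
    conn.execute(
        "INSERT INTO assets VALUES (?, ?, ?, ?, ?)",
        ("FSL-0042", "Unix workstation", "Sun Microsystems",
         "DSRC main computer room", "2000-03-31"),
    )

    # A query a Web front end might issue: assets whose warranty
    # expires before the end of Fiscal Year 2000.
    for row in conn.execute(
            "SELECT asset_id, description, warranty_end FROM assets "
            "WHERE warranty_end < ? ORDER BY warranty_end", ("2000-10-01",)):
        print(row)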

Figure 19. View of the main FSL computer room containing the Central Facility equipment.

The FSL system administrators continued to support numerous Unix operating systems, including HP-UX, IBM AIX, SCO Unix, SGI IRIX, Red Hat Linux, and Sun Solaris and SunOS. Apple Mac System 7 and Microsoft Windows 3.x, 95, 98, and NT were also supported. The operating systems and commercial applications software used by FSL developers were periodically upgraded as new versions became available from vendors. Additional utility, productivity, and tool-type open-source software packages were installed on FSL servers and made available for laboratory-wide use.

FSL Network

The FSL Network Team spent most of Fiscal Year 1999 preparing for and moving to the new David Skaggs Research Center. Network and power cables were set up in FSL’s new main computer room, and network drops were prepared for all FSL offices. The networks in the new and old buildings were connected by a wireless DS-3 45-Megabit per second (Mbps) network link that allowed simultaneous operation in both locations and provided uninterrupted network access during the move. These preparations resulted in a smooth and efficient transition to the new building.

With the move to the DSRC, the FSL network advanced from primarily shared 10-Mbps Ethernet desktop connectivity to switched (nonshared) 10- and 100-Mbps connectivity. This in turn increased the demand on the new Asynchronous Transfer Mode (ATM) backbone network. To alleviate the additional backbone congestion expected with installation of the new FSL High-Performance Computer System, the FSL network was augmented with two new 10-Gigabit-per-second (Gbps) ATM switches. Figure 20 shows a simplified schematic of the current FSL network.

Figure 20. Schematic of the current FSL network.

The move and the new network equipment allowed the FSL network to provide redundant Internet Protocol (IP) routing within and outside of the laboratory. All IP routers now have backup routers that automatically route traffic if one of the primary routers fails. Recent upgrades also allow concurrent use of multiple Internet Service Providers (ISPs). The FSL T-1 (1.5-Mbps) connection to the Cable & Wireless ISP is shared with NOAA-Boulder’s 6-Mbps fractional T-3 ISP connection. This arrangement provides rapid, nearly transparent failover in case one of the ISP connections is lost.

FSL’s network dial-in capabilities were augmented with three toll-free 800 lines, significantly enhancing network access for FSL employees on travel.

Collaborative efforts continued between Network and System Administration staff toward improving awareness and preparedness for computer and network security. FSL participates in NOAA-Boulder groups to further these efforts both within and outside of FSL. Enforcement of security policies was also improved with the implementation of tools that help detect intrusion attempts.

Data Acquisition, Processing, and Distribution

The Data Systems Group continued to support the real-time meteorological data-acquisition and processing systems within the Central Facility. Multiple computers operate in a distributed, event-driven, real-time environment, known as the Networked Information Management client-Based User Service (NIMBUS), to acquire, process, store, and distribute conventional and advanced meteorological data. NIMBUS consists of a server, called the Cloud Server (shown in Figure 21), application clients, and acquisition clients (discussed later). NIMBUS is platform independent, running on multiple computer platforms; its more than 60 software components operate on the seven primary NIMBUS ingest, processing, storage, and distribution computers. NIMBUS acquires and distributes over 100 meteorological products and processes about 30 GB of conventional and advanced meteorological data daily. The operational status of most of these products, and that of the NIMBUS system as a whole, is monitored closely.
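As a conceptual illustration of this event-driven design (this is not NIMBUS code, and the product names and handler functions are invented), a server that routes each arriving product to its subscribed clients can be sketched in Python as follows:

    # Conceptual sketch of event-driven product routing, in the spirit of
    # the NIMBUS Cloud Server (not actual NIMBUS code; names are invented).
    from collections import defaultdict

    class CloudServer:
        """Routes each arriving product to the clients subscribed to it."""
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, product_type, client):
            self.subscribers[product_type].append(client)

        def publish(self, product_type, payload):
            for client in self.subscribers[product_type]:
                client(product_type, payload)

    def storage_client(product_type, payload):
        print(f"storing {product_type}: {len(payload)} bytes")

    def decoder_client(product_type, payload):
        print(f"decoding {product_type}")

    server = CloudServer()
    server.subscribe("METAR", storage_client)
    server.subscribe("METAR", decoder_client)
    server.publish("METAR", b"KDEN 011753Z 18010KT ...")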

NIMBUS receives data from many sources, including the National Weather Service (NWS) distribution networks such as NOAAPORT and the High-resolution Data Services (HDS); commercial vendors such as Aeronautical Radio Inc. (ARINC) and Weather Services International (WSI) Corporation, which provide many data products including ARINC Communications Addressing and Reporting System (ACARS) data and aircraft pilot reports (PIREPs), respectively, and radar products; and direct data originators such as the FSL Profiler Control Center for profiler data, the National Centers for Environmental Prediction (NCEP) for model grids, the National Environmental Satellite, Data and Information Service (NESDIS) for Geostationary Operational Environmental Satellite (GOES) data, and the NWS for WSR-88D (Doppler) radar data.

Figure 21. Schematic of NIMBUS showing routing by the Cloud Server.

Customer support for Central Facility datasets was provided to FSL scientists and developers who use the data in various modeling, application, and workstation development activities. Real-time NIMBUS data were also distributed to several organizations external to FSL using the Local Data Manager (LDM) protocol developed by the University Corporation for Atmospheric Research (UCAR) Unidata Program. Distributed datasets included GOES imagery to the NOAA Climate Diagnostics Center (CDC), wind profiler data to the Unidata Internet Data Distribution (IDD), ACARS data to the UCAR Joint Office for Science Support (JOSS), and quality-controlled ACARS data to many organizations (such as universities) and the National Center for Atmospheric Research (NCAR) Research Applications Program (RAP). A schematic of the NIMBUS data flow is shown in Figure 22.

Data format translators and storage routines were upgraded to handle new data formats. The ACARS software was modified to handle new Federal Express weather reports and the United Airlines over-water data format. Software to translate and store maritime observation reports was developed to replace legacy buoy data processing software. Several Real-Time Verification System (RTVS) datasets were made available through NIMBUS for the FSL Aviation Division, including turbulence algorithm outputs, significant meteorological reports (SIGMETs) and Convective SIGMETs, National Lightning Detection Network (NLDN) data, and hourly precipitation data. A new version of the MAPS Surface Analysis System (MSAS) was installed on NIMBUS in support of FSL’s American Mesoscale Experiment (FAME) and North American Atmospheric Observing System (NAOS) projects.

Upgrades to the Central Facility subsystems included the installation of a networked X.25 communications protocol interface system and the development of Unix-based software to ingest its data into NIMBUS. This system replaced legacy VAX/VMS hardware and software for receiving ACARS data and NWS Telecommunications Gateway Direct Connect Service (DCS) data. Another upgrade replaced the legacy VAX/VMS National Lightning Detection Network subsystem with software that uses NLDN data received through NOAAPORT.

A software upgrade significantly improved data transfer reliability in the acquisition, translation, and storage of RUC-2 data from NCEP. Higher resolution (32-km) Eta model netCDF files, primarily used for initializing RUC-2 model runs, are now available to FSL users.
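As a brief usage illustration (the file path and variable name below are assumptions, not an FSL script), such netCDF files can be examined with the netCDF4 Python package:

    # Illustrative only: inspecting a 32-km Eta netCDF file. The path and
    # variable name are assumptions for the example.
    from netCDF4 import Dataset

    with Dataset("/public/data/grids/eta/32km/latest.nc") as nc:
        print(sorted(nc.variables))          # list the variables in the file
        temperature = nc.variables["T"][:]   # hypothetical temperature grid
        print(temperature.shape)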

Enhancements were made to the FSL Data Repository (FDR) storage subsystem to include new datasets, in addition to NOAAPORT data. The FDR data retrieval capability was implemented to support several methods of providing data retrospectively, including the retrospective NIMBUS product and WFO-Advanced (Weather Forecast Office advanced workstation prototype) case-generation functionality.

Figure 22. Schematic of NIMBUS data flow.

Development began on an advanced, centralized database of metadata pertaining to meteorological datasets, such as observing-instrument characteristics and station location (latitude, longitude, and altitude). Metadata aid scientists and developers in correctly interpreting and managing large quantities of meteorological data descriptors and associated annotations. Metadata are also used extensively throughout the meteorological data acquisition, translation, storage, and utilization process.
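A minimal sketch of one such metadata record is shown below; the field names and sample values are illustrative assumptions, not the actual database schema.

    # Illustrative station-metadata record (field names and values are
    # assumptions, not the actual FSL metadata schema).
    from dataclasses import dataclass

    @dataclass
    class StationMetadata:
        station_id: str       # observing-station identifier
        name: str
        latitude: float       # degrees north
        longitude: float      # degrees east
        elevation_m: float    # station altitude above mean sea level, meters
        instrument: str       # observing instrument characteristics

    example = StationMetadata("XYZ1", "Example mesonet site",
                              40.00, -105.00, 1650.0, "surface mesonet station")
    print(example)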

Work continued in transforming NIMBUS from a client-server architecture based on functional decomposition to one based on distributed objects. An interprocess communications method using Common Object Request Broker Architecture (CORBA) was prototyped. Progress was made toward improving system maintainability with the design and development of object-oriented internal database software, as well as analysis of NIMBUS data translation methods.
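The generic Python sketch below illustrates the distributed-object idea (a client-side proxy forwarding method calls through a transport layer to a remote servant); it is conceptual only and does not represent the actual CORBA interfaces being prototyped.

    # Conceptual sketch of the distributed-object pattern (not CORBA code);
    # all names are invented for illustration.

    class DataStoreServant:
        """Server-side object that implements the interface."""
        def latest_product(self, product_type):
            return f"<latest {product_type} product>"

    class Transport:
        """Stand-in for the ORB/network layer; here it simply calls locally."""
        def __init__(self, servant):
            self.servant = servant
        def invoke(self, method, *args):
            return getattr(self.servant, method)(*args)

    class DataStoreProxy:
        """Client-side stub exposing the same interface as the servant."""
        def __init__(self, transport):
            self.transport = transport
        def latest_product(self, product_type):
            return self.transport.invoke("latest_product", product_type)

    proxy = DataStoreProxy(Transport(DataStoreServant()))
    print(proxy.latest_product("METAR"))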

To ensure that NIMBUS users continue to have access to reliable data, the NIMBUS software was analyzed thoroughly for susceptibility to Year 2000 (Y2K) problems. Files requiring modification were identified, corrected, and tested before being integrated into NIMBUS. In addition, sample Y2K test datasets with artificial time and date values spanning the year 2000 boundary were prepared for testing by FSL application developers who use Central Facility data.
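For example, artificial observation times spanning the rollover can be generated as in the simplified sketch below (illustrative only, not the actual FSL test-data generator); the two-digit year column shows where ambiguity arises.

    # Simplified sketch of generating artificial timestamps that span the
    # Y2K boundary (not the actual FSL test-data generator).
    from datetime import datetime, timedelta

    start = datetime(1999, 12, 31, 21, 0)      # a few hours before rollover
    for hour in range(7):
        t = start + timedelta(hours=hour)
        print(t.strftime("%Y-%m-%d %H:%M"), "two-digit year:", t.strftime("%y"))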

Laboratory Project, Research, and External Support

The Facility Division continued to distribute real-time and retrospective data and products to all internal FSL projects and numerous outside groups and users. External recipients included:

  • Two NOAA Oceanic and Atmospheric Research (OAR) laboratories: the Environmental Technology Laboratory (ETL) was provided grid and text data, WSR-88D radar data, and Denver Urban Drainage and Flood Control District (UDFCD) Alert data; and the Climate Diagnostics Center (CDC) was provided real-time GOES-8 and 10 extended-sector satellite data in support of the Pan-American Climate Studies (PACS) program.
  • NWS Aviation Weather Center in Kansas City.
  • UCAR COMET and Unidata Program Center.
  • NCAR RAP and Mesoscale and Microscale Meteorology (MMM) Division.

To support FSL’s fall 1999 D3D (advanced three-dimensional workstation display) Evaluation Exercise, three new AWIPS review cases were generated: the Colorado/Wyoming Supercell, 26–27 June 1999; the Salt Lake City Tornado, 11 August 1999; and Hurricane Dennis, 4 September 1999. Three existing cases (the Oklahoma Tornado Outbreak, 3 May 1999; the Fort Collins Flash Flood, 28 June 1997; and the Colorado Blizzard, 24 October 1997) were also staged for use during the evaluation exercise. During each of four evaluation phases, six cases could be swapped on and off the review workstation disk to meet the tight case evaluation schedule.

In conjunction with the Forecast Research Division, FD implemented RUC-2 model backup capability for NCEP, where it resides as part of the suite of operational weather models run by NWS. FSL-generated RUC-2 grids were continuously transmitted to the NWS Office of System Operations (OSO); staff there are able to switch to the FSL grids and distribute them to the operational forecast offices and centers in case of system problems at NCEP. The value of this capability was proven when a fire destroyed the main NCEP Cray computer late last year.

The division initiated collaboration with the NOAA National Climatic Data Center (NCDC) to develop a NOAAPORT data archive for the NWS. FD staff supported project development staff at NCDC, which archives all NOAAPORT data, in setting up their NOAAPORT receiver system. FSL made available to NCDC its NOAAPORT receiver system software, which has been running successfully at FSL for more than a year.

FD worked with the Forecast Research Division in providing, upon request, quality controlled ACARS (known as QC ACARS) data through the FSL Web server to the university community and other NOAA agencies.

A WFO-Advanced Data Server was installed in the Central Facility to provide a source of real-time, production NWS Advanced Weather Interactive Processing System (AWIPS) datasets to FSL users. A second AWIPS Data Server was also set up to provide data to the FX-Net workstation systems installed at Plymouth State College (PSC) in New Hampshire. Acquisition of local radar products for PSC has been established within the Central Facility.

Besides the datasets mentioned above, FD provided outside groups with other data and products, which included Doppler radar, upper-air soundings, routine meteorological aviation reports (METARs), profiler, satellite imagery and soundings, and MAPS and LAPS grids. Archived NOAAPORT data were provided to the UCAR Cooperative Program for Operational Meteorology, Education and Training (COMET) for use in its Mesoscale Analysis and Prediction (COMAP) courses. Operations staff served as liaison for outside users, providing them information on data availability, system status, modifications, and upgrades.

The Data Systems Group conducted quarterly Central Facility task-prioritization meetings to ensure that development efforts within the division responded to all FSL requirements. The FSL Director, division chiefs, project leaders, and other interested parties were invited to review and discuss with the lead FD developers the status of all Central Facility tasks, including data acquisition, processing, storage, NIMBUS, and related facility development efforts. These meetings resulted in a prioritized task list on the Web, ensuring that FD development activities were carried out in accordance with FSL management, project, and user requirements.

Technical advice was provided to FSL management on optimal use of laboratory computing and network resources, and staff participated either as chair or members on the following committees:

  • Chaired the FSL Technical Steering Committee (FTSC) that developed an extensive plan for the move to the new David Skaggs Research Center in the spring of 1999.
  • Served on the FSL Technical Review Committee.
  • Served as Core Team and Advisory Team members, assisting in decisionmaking for procurement of the FSL High-Performance Computing System.

FD staff also participated in cross-cutting activities that extended beyond FSL, as follows:

  • Served as vice chair of the OAR Technical Committee for Computing Resources (TCCR) and was named chair of the TCCR High-Performance Computing Working Group.
  • Served on the Department of Commerce Boulder Laboratories Network Working Group.
  • Served as member of the NOAA High-Performance Computing Study Team.

The Operations staff supported the real-time Central Facility for 16 hours a day, seven days a week. They used the Facility Information and Control System (FICS) to monitor the data-acquisition systems, NIMBUS, and the associated hardware and software. The operators took corrective action when problems occurred, rebooted machines and/or restarted software as necessary, and referred unresolved problems to the appropriate system administrators, network staff, or system developers.

In support of the FSL user community, the operators answered routine facility, data, and systems-related questions, and performed the following specific tasks:

  • Oversaw the daily laboratory-wide computer system backups amounting to 400 GB of data.
  • Serviced approximately 60 user requests for data compilations, file restoration, and account management.
  • Created a Web database of more than 60 pages documenting the procedures for maintaining the Central Facility real-time datasets.
  • Performed an FSL-wide inventory of over 400 hardware components for the database project.
  • Assisted in facilitating approximately 30 video teleconferences.

FD electronics technicians performed numerous tasks associated with equipment setups, network connections, and PC support.

Projections

Computer Facility

The division will focus on preparing the FSL computer room and the needed infrastructure for installation of the High-Performance Computer System (HPCS) in late 1999 (Figure 23), and on acceptance testing through early 2000. A core team consisting of system administrators and network staff will concentrate on these tasks. The new machine will be made available to FSL users in March 2000, and to outside users soon after. Policies and procedures for the operation and use of the HPCS will be established. Planning for the first major upgrade of the HPCS will begin late in the year. Figure 24 shows the new Mass Store System associated with the HPCS.

Figure 23. FSL's new High-Performance Computer System.

A new version of FSLHelp, based on the Bugzilla software developed by the Mozilla Organization, will be implemented. Through a Web-based interface, the new help system will allow users to submit requests to operators, system administrators, and network staff. The system will assign requests to the appropriate technical staff and allow efficient prioritization, scheduling, and tracking of each request.

A new advanced system-monitoring tool based on the Big Brother software will be created. The system will consist of local clients that test system conditions and network availability and send status reports to one or more display servers, which notify operators and/or system administrators about system problems. Local system clients will monitor disk space, CPU usage, and system messages, and verify that important processes are up and running.
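A minimal sketch of such a local client check appears below; it is illustrative only (not Big Brother code), and the monitored process name, thresholds, and print-based reporting are assumptions.

    # Illustrative local monitoring client (not Big Brother code): check
    # disk space and a required process, then report a status that a
    # display server could turn into a green/yellow/red indicator.
    import shutil
    import subprocess

    def disk_status(path="/", warn_pct=90):
        usage = shutil.disk_usage(path)
        used_pct = 100 * usage.used / usage.total
        return ("WARN" if used_pct >= warn_pct else "OK",
                f"{path} {used_pct:.0f}% full")

    def process_status(name="nimbus"):
        # pgrep exits nonzero when no matching process is found.
        found = subprocess.run(["pgrep", "-f", name],
                               capture_output=True).returncode == 0
        return ("OK" if found else "WARN", f"process '{name}'")

    for status, detail in (disk_status(), process_status()):
        print(status, detail)   # a real client would send this to the display server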

The following facility upgrades also will be accomplished:

  • An upgraded central laboratory server will be implemented for more efficient hosting of Web pages, documentation, and user mail services.
  • The Facility Information and Control System will be enhanced by adding new data sources and providing more detailed monitoring information for existing real-time systems.
  • The first Linux systems to be used by FD developers will be placed into operation.
  • Development of the FSL Data Repository will continue.
  • The FSL Hardware Assets Management System will be placed into operation, and a consolidated hardware/software database will be established.
  • Off-site storage of FSL system and software backup tapes will be initiated.

Figure 24. FSL's new Mass Store System.

FSL Network

FD had planned to connect FSL to the very high-performance Backbone Network Service (vBNS) last year, but since then, the vBNS has been changed from a research to a commercial network (now called the vBNS+). Due to the expense of this service, FSL will work instead with the NOAA-Boulder Network Team and NCAR to connect to the Abilene research network (http://www.internet2.org) to obtain high-speed external network access. Figure 25 shows the planned external network connections.

The FSL Network Team will begin an upgrade to replace the Ethernet switches and routers that have been unreliable or are no longer supported by vendors. The current Ethernet switches purchased by NOAA-Boulder and FSL have proven less reliable than FSL requires and will have to be replaced. Because the FSL ATM router interfaces are fully loaded, additional interfaces will be added to spread the load and provide higher reliability and throughput in the FSL network.

An upgrade of the FSL network’s remote access capabilities will be initiated. The dial-in server currently at FSL provides 33.6 Kbps analog modem and 128 Kbps Integrated Services Digital Network (ISDN) connectivity. This will be upgraded to provide 56 Kbps V.90 modem access, as well as high-speed Digital Subscriber Line (DSL) capability. A Virtual Private Network (VPN) server will be added that will allow secure access to the FSL network from anywhere on the Internet.

Data Acquisition, Processing, and Distribution

Several activities are scheduled for completion in Fiscal Year 2000. To improve FSL's capability to access grid data from NCEP, the DBNet data-transfer mechanism will be implemented. DBNet is a distributed data communications and processing system used to build a reliable, scalable, and maintainable data-flow infrastructure.

Enhancements will be made to the Central Facility NOAAPORT data-acquisition system software. The capability to decode encrypted national radar data will be developed. The fourth NOAAPORT channel providing GOES Data Collection Platform (DCP) data and non-GOES satellite imagery will be accessed and made available to FSL users. Planned hardware upgrades include the procurement and installation of a second Litton PRC (AWIPS contractor) NOAAPORT Receive System (NRS) Communications Processor and a new acquisition/fanout processor.

To supplement the ACARS datasets, FD will acquire and process the international Aircraft Meteorological Data Relay (AMDAR) datastream, transmitted primarily from Europe and Eastern Asia.

The first phase of the metadata information management system will be completed. Requirements analysis, design, and implementation of a flexible metadata database using the Oracle database management system will be accomplished for grid data. The overall goal is to provide a comprehensive metadata database that will facilitate accurate and complete real-time and retrospective data processing. Techniques used in creating the metadata database will be shared with NCDC and other NOAA research laboratories. More metadata information will be gathered for all data to be stored in the FDR.

FD plans to participate in the Unidata Cooperative Opportunity for NCEP Data Using IDD Technology (CONDUIT) project, which is intended to distribute high-resolution NCEP forecast model datasets. A Linux-based server will be procured to meet the CONDUIT requirements. The Data Systems Group will analyze the model data sent via CONDUIT to determine which datasets will best serve FSL’s needs. They will also help assess the viability of the FSL CONDUIT server to function as a point distributor to other users.

Laboratory Project, Research, and External Support

Information on the Central Facility capabilities will continue to be provided to FSL and outside users, including the NWS, FAA, UCAR, NCAR, universities, and other OAR laboratories and NOAA offices. Their requests for support, advice, and data will be coordinated with appropriate staff. Following FSL management approval, FD will provide real-time and retrospective data to researchers at these organizations.

In support of FSL projects, several datasets will be acquired in the next fiscal year. The NCEP Aviation Weather Center algorithm outputs (including Neural Net Icing, Vertical Velocity Icing, and Vertical Velocity Storm) will be acquired for the Real-Time Verification System (RTVS) project.

As part of FD’s continuing support of FAME and NAOS, plans are underway to provide additional Local Data Acquisition and Dissemination (LDAD) mesonet data required by these projects.

The division will continue collaboration with NCDC and COMET in the development of the prototype NOAAPORT data archive system. Technical assistance and software will be provided to NCDC for their NOAAPORT Data Archive and Retrieval System (NDARS) development effort. The possibility of setting up a NOAAPORT receiver system backup capability between FSL and NCDC also will be explored.

Of special note is the continuation of the FSL RUC-2 backup for NCEP. The RUC-2 backup generation and transmission mechanism will be transferred from the FSL SGI Origin 2000 computer to the new HPCS, with special attention given to maintaining uninterrupted transmission of RUC-2 grids to the NWS Office of Systems Operations.

Figure 25. Future external connections for the FSL network.

