RHIC & ATLAS Computing Center

Brookhaven Lab has a strong history of successfully operating large-scale computational science, data management, and analysis infrastructure. This expertise and the tools developed at the Lab have been key factors in the success of the scientific programs at RHIC, NSLS-II, and CFN (all DOE Office of Science User Facilities), as well as in biological, atmospheric, and energy systems science.

These capabilities have also been a crucial part of the Lab's participation in international research collaborations such as the ATLAS experiment at the LHC, and they will help make the case for building a future electron-ion collider (EIC) at Brookhaven.

Computing Expertise

One example of Brookhaven's computing expertise is the RHIC & ATLAS Computing Facility (RACF). Formed in 1997 to support experiments at RHIC, Brookhaven's flagship particle collider for nuclear physics research, the RACF is now at the center of a global computing network connecting more than 2,500 researchers around the world with data from RHIC and the ATLAS experiment at the Large Hadron Collider in Europe. This world-class center houses an ever-expanding farm of computing cores (50,000 as of 2015). It receives data from the millions of particle collisions that take place each second at RHIC, along with petabytes of data generated by the LHC's ATLAS experiment, and it stores, processes, and distributes that data and runs analysis jobs for collaborators around the nation and the world. The success of this distributed approach to data-intensive computing, combined with new approaches for handling data-rich simulations, has helped establish the U.S. as a leader in high-capacity computing, thereby enhancing international competitiveness.

  1. How Data Becomes Physics: Inside the RACF

    Thursday, March 3, 2016

    The RHIC & ATLAS Computing Facility (RACF) at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory sits at the center of a global computing network. It connects more than 2,500 researchers around the world with the data generated by millions of particle collisions taking place each second at Brookhaven Lab's Relativistic Heavy Ion Collider (RHIC, a DOE Office of Science User Facility for nuclear physics research), and the ATLAS experiment at the Large Hadron Collider in Europe. Watch this video to learn how the people and computing resources of the RACF help these scientists turn petabytes of raw data into physics discoveries.
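
The distributed, data-intensive model described above can be illustrated with a short sketch. The Python snippet below is a schematic illustration only, not the RACF's production software; the site names, datasets, and core counts are hypothetical. It shows the "send jobs to the data" idea: analysis jobs are dispatched to a site that already holds a replica of the needed dataset, rather than moving petabytes across the network.

    from collections import defaultdict

    # Hypothetical catalog: which sites hold a replica of each dataset.
    replica_catalog = {
        "rhic-run14-AuAu": {"BNL", "SiteA"},
        "atlas-2015-physics": {"BNL", "SiteB", "SiteC"},
    }

    # Hypothetical snapshot of free cores at each site.
    free_cores = {"BNL": 1200, "SiteA": 300, "SiteB": 800, "SiteC": 50}

    def broker(dataset):
        """Send the job to the least-loaded site that already has the data."""
        sites = replica_catalog.get(dataset)
        if not sites:
            raise LookupError("no replica registered for " + dataset)
        return max(sites, key=lambda s: free_cores.get(s, 0))

    # Dispatch a batch of ten analysis jobs and tally where they run.
    assignments = defaultdict(int)
    for _ in range(10):
        site = broker("atlas-2015-physics")
        assignments[site] += 1
        free_cores[site] -= 1  # one core is now busy at that site

    print(dict(assignments))  # {'BNL': 10} with these starting numbers

Production workload-management systems layer data transfers, priorities, and failure recovery on top of this basic brokering idea; the sketch captures only the core principle.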

A History of High Performance Computing

Brookhaven Lab also has a more recent history of operating leading high performance computing clusters, designed specifically for the numerically intensive applications often required by the theory programs that accompany large-scale experimental and observational facilities.

In 2007, Brookhaven Lab acquired New York Blue/L, an 18-rack IBM Blue Gene/L massively parallel supercomputer. It was the centerpiece of the New York Center for Computational Sciences (NYCCS), a cooperative effort between BNL and Stony Brook University that involved universities throughout New York State. In 2009, BNL added New York Blue/P, which consisted of two racks of the Blue Gene/P series. New York Blue/L debuted at number 5 on the June 2007 Top 500 list of the world's fastest computers; New York Blue/P debuted at number 250 on the June 2009 list. New York Blue/L was decommissioned in January 2014 and New York Blue/P in October 2015. Together, the two machines enabled computations critical for research in biology, medicine, materials science, nanoscience, renewable energy, climate science, finance, and technology. Today, BNL operates a Blue Gene/Q system, acquired in the fall of 2011, as part of three facilities and collaborations: NYCCS, RIKEN, and LQCD. One rack of the Blue Gene/Q system was benchmarked for data-intensive applications and debuted at number 6 on the June 2012 Graph 500 list.
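
The Graph 500 list mentioned above ranks machines on data-intensive workloads rather than the floating-point arithmetic measured by the Top 500; its central kernel is a breadth-first search (BFS) over an enormous synthetic graph. A minimal single-node Python sketch of that kernel, using a toy hypothetical graph, might look as follows (the actual benchmark runs a distributed BFS over billions of edges).

    from collections import deque

    # Toy adjacency list standing in for the benchmark's huge synthetic graph.
    graph = {
        0: [1, 2],
        1: [0, 3],
        2: [0, 3],
        3: [1, 2, 4],
        4: [3],
    }

    def bfs_parents(source):
        """Breadth-first search returning each reached vertex's parent,
        i.e., the BFS tree that the Graph 500 kernel builds and validates."""
        parent = {source: source}
        frontier = deque([source])
        while frontier:
            v = frontier.popleft()
            for w in graph[v]:
                if w not in parent:  # first visit fixes w's BFS parent
                    parent[w] = v
                    frontier.append(w)
        return parent

    print(bfs_parents(0))  # {0: 0, 1: 0, 2: 0, 3: 1, 4: 3}

The benchmark's figure of merit, traversed edges per second (TEPS), rewards exactly the kind of irregular memory access that data-intensive applications stress, which is why a strong Graph 500 showing complements a Top 500 ranking.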

The new BNL Scientific Data and Computing Center combines this expertise in high-throughput, high-performance, and data-intensive computing, along with data management and preservation, into one computing facility. The Center offers services to local and national clients that require high-performance, highly available computing, with an emphasis on data-intensive applications.