This web site was copied prior to January 20, 2005. It is now a Federal record managed by the National Archives and Records Administration. External links, forms, and search boxes may not function within this collection.
News Tip

 


March 16, 2004

For more information on these science news and feature story tips, contact David Hart, (703) 292-7737, dhart@nsf.gov

Virtual Screening Lab Zeroes in on New Drugs

Researchers at Rensselaer Polytechnic Institute (RPI) have come up with computational tools that serve as a virtual screening lab to help chemists weed through millions of possible drug candidates even before they dirty their first test tube.

Chemist Curt Breneman, mathematician Kristin Bennett, and computer scientist Mark Embrechts developed faster and more accurate techniques for describing molecules and combined them with next-generation neural networks and learning methods as part of the Drug Discovery and Semi-Supervised Learning (DDASSL) project.
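The general idea of descriptor-based screening can be illustrated with a toy sketch: represent each molecule as a vector of numeric descriptors, fit a simple model on molecules with known activity, then rank untested candidates by predicted activity. The descriptors, molecule names, and perceptron model below are all hypothetical illustrations; this is not the DDASSL code or its actual learning method.

```python
# Toy sketch of descriptor-based virtual screening (hypothetical data;
# not the DDASSL implementation).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_perceptron(examples, epochs=100, lr=0.1):
    """Fit a linear scoring function on (descriptor, active?) pairs."""
    w = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for x, active in examples:
            pred = 1 if dot(w, x) > 0 else 0
            err = (1 if active else 0) - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Invented descriptors: [polarity, molecular-weight proxy, ring count]
known = [
    ([0.9, 0.2, 1.0], True),
    ([0.8, 0.3, 1.0], True),
    ([0.1, 0.9, 0.0], False),
    ([0.2, 0.8, 0.0], False),
]
w = train_perceptron(known)

# Screen untested candidates: the highest score is the likeliest hit.
candidates = {"mol_A": [0.85, 0.25, 1.0], "mol_B": [0.15, 0.95, 0.0]}
ranked = sorted(candidates, key=lambda m: dot(w, candidates[m]), reverse=True)
print(ranked)  # ['mol_A', 'mol_B']
```

The payoff is that once the model is trained, scoring a new molecule is a cheap vector operation, which is what makes screening millions of candidates per day feasible.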

Funded by a $1.2 million National Science Foundation Knowledge and Distributed Intelligence award, the DDASSL (pronounced "dazzle") project has spawned a number of descendants. Today, 10 research projects on the RPI campus, ranging from the life sciences to materials science to cybersecurity, can trace their origins in part to DDASSL (http://www.drugmining.com/).

In addition, Concurrent Pharmaceuticals, based in suburban Philadelphia, is evaluating the DDASSL techniques in a real-world environment. According to Jean-Pierre Wery, Concurrent's vice president of computational drug discovery, the information used by DDASSL is different from what has been used traditionally. "These tools are consistent with Concurrent's efforts to change the way we think about the drug discovery process," Wery said.

When starting to develop a new drug to attack a particular biological target, a pharmaceutical chemist is confronted with the accumulated knowledge stored in vast public and corporate databases on tens of millions of potential drug molecules and their effects.

DDASSL techniques, running on a relatively inexpensive Linux cluster, provide a fast and accurate tool for pinpointing the likeliest candidates from these databases. DDASSL can screen 10 million molecules per day for potential drug interactions with a model of the biological target molecule.

By comparison, the best virtual screening techniques prior to DDASSL used less accurate molecular descriptors and still examined fewer than a million molecules per day. Actual laboratory tests top out at several hundred to a few thousand molecules per day, even with the latest high-throughput equipment.
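The quoted rates imply a dramatic gap in time-to-answer. A back-of-the-envelope comparison, using an illustrative 10-million-molecule library and taking "several hundred or thousand" lab tests as roughly 1,000 per day:

```python
# Rough screening-time comparison based on the rates quoted above.
# The library size is illustrative; rates are molecules per day.
library = 10_000_000

rates = {
    "DDASSL virtual screen": 10_000_000,
    "prior virtual screens": 1_000_000,   # "less than a million" per day
    "high-throughput lab":   1_000,       # hundreds to thousands per day
}

for method, per_day in rates.items():
    days = library / per_day
    print(f"{method}: {days:,.0f} day(s)")
```

Screening the whole library takes one day with DDASSL, ten or more days with earlier virtual tools, and about 10,000 days (roughly 27 years) by physical assay alone.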

"DDASSL tools are also an easy way to take an idea and run it past a model," Breneman said. "Chemists can test their 'wild ideas' quickly and without the expense of a lab test."

NSF Program Officer: Maria Zemankova, (703) 292-8918, mzemanko@nsf.gov

Principal Investigators:
Curt Breneman, brenec@rpi.edu, (518) 276-2678
Kristin Bennett, bennek@rpi.edu, (518) 276-6899
Mark Embrechts, embrem@rpi.edu, (518) 276-4009

This pair of images shows the surface electrostatic potential of a small molecule (alanine dipeptide) calculated using accurate ab initio techniques (top) and techniques developed in the DDASSL project (bottom). While producing similar results, DDASSL computations are at least a thousand times faster. Therefore, in the same amount of time as current computational drug-screening methods, DDASSL techniques can include additional chemical information and much more accurate descriptors, leading to better and more reliable predictions.
Credit: Curt Breneman, RPI


NSF Digital Libraries Part of New Yahoo! Search Effort

As part of a new effort by Yahoo! to expand the breadth and depth of the Web content it searches, several digital libraries supported by the National Science Foundation (NSF) will make their collections of Supreme Court audio, Babylonian artifacts and science education resources accessible through Yahoo!'s enhanced search capabilities.

"NSF supports many innovative digital library collections, and we are pleased to have these unique national and international resources included in Yahoo!'s effort to provide the best and most relevant Web content to its users," said Steve Griffin, program director for digital library activities in NSF's Computer and Information Sciences and Engineering directorate.

Three NSF-supported digital libraries are among the first non-commercial partners in Yahoo!'s new program designed to increase comprehensiveness, maintain the most up-to-date data and improve relevancy of search results for its users.

The NSF's National Science Digital Library (http://www.nsdl.org/) integrates more than 250 merit-reviewed resource collections, organized in support of science, technology, engineering and mathematics education at all levels.

Northwestern University's online OYEZ project (http://www.oyez.org/) contains more than 2,000 hours of Supreme Court audio, including all audio recorded since 1995. NSF support will allow the OYEZ archive eventually to provide public access to all Supreme Court audio—more than 6,000 hours.

UCLA's Cuneiform Digital Library Initiative (http://cdli.ucla.edu/), which has been supported by NSF, is pursuing the systematic digital documentation and electronic publication of approximately 500,000 cuneiform tablets that document Babylonian history from its beginnings around 3500 B.C. until the time of Christ.

NSF Program Officers:
Stephen Griffin, (703) 292-8918, sgriffin@nsf.gov
Lee Zia, (703) 292-5140, lzia@nsf.gov


TeraGrid's First Targets Include Galaxy Formation and Pollution Cleanup

The first computing resources of the National Science Foundation's (NSF) TeraGrid became fully available for scientific use in January, and some of the first applications will be tracking the formation of galaxies in the early universe and finding the most efficient and least expensive ways to clean up groundwater pollution.

Other early TeraGrid (http://www.teragrid.org/) users will study seismic events and analyze biomolecular dynamics on the Linux clusters at the National Center for Supercomputing Applications (NCSA) and the San Diego Supercomputer Center (SDSC). The two clusters together offer 4.5 teraflops (trillions of calculations per second) of computing power and access to more than 250 terabytes of disk storage. Allocations for use of these machines were awarded by the NSF's Partnerships for Advanced Computational Infrastructure (PACI) last October.

"We are pleased to see scientific research being conducted on the first production TeraGrid clusters," said Peter Freeman, head of NSF's Computer and Information Sciences and Engineering directorate. "Leading-edge supercomputing capabilities are essential to the emerging cyberinfrastructure, and the TeraGrid represents NSF's commitment to providing high-end, innovative resources."

NSF's TeraGrid is a multi-year effort to deploy the world's largest, most comprehensive distributed infrastructure of computation, information and instrumentation resources for scientific research. Hardware at sites across the country is connected by a 40-gigabit per second backplane—the fastest research network on the planet.

The TeraGrid sites include NCSA at the University of Illinois, Urbana-Champaign; SDSC at the University of California, San Diego; the Center for Advanced Computing Research (CACR) at Caltech; Argonne National Laboratory; and the Pittsburgh Supercomputing Center (PSC). In 2003, NSF made awards to extend the TeraGrid partnership to Indiana University, Oak Ridge National Laboratory, Purdue University and the Texas Advanced Computing Center at the University of Texas at Austin.

In December, NCSA and SDSC installed Linux clusters that will provide an additional 11 teraflops of computing power. The expanded clusters will enter production by June 2004, bringing the combined power of the completed TeraGrid systems to 20 teraflops, including the 6-teraflops, 3,000-processor Terascale Computing System at PSC.

Mercury, the first TeraGrid cluster at NCSA, runs on Intel's Itanium architecture. The 512-processor cluster has a peak performance of 2.7 teraflops (trillions of calculations per second).
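The quoted peak figure is easy to sanity-check: 2.7 teraflops spread over 512 processors works out to roughly 5.3 gigaflops per processor, consistent with an early-2000s Itanium chip.

```python
# Sanity check of Mercury's quoted peak performance.
peak_flops = 2.7e12   # 2.7 teraflops
processors = 512

per_proc_gflops = peak_flops / processors / 1e9
print(f"{per_proc_gflops:.2f} GFLOPS per processor")  # 5.27 GFLOPS per processor
```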
Credit: NCSA

National Science Foundation
Office of Legislative and Public Affairs
4201 Wilson Boulevard
Arlington, Virginia 22230, USA
Tel: 703-292-8070
FIRS: 800-877-8339 | TDD: 703-292-5090