High-Performance Parallel Computing
Many researchers in the Department of Physiology and Biophysics and the Institute for Computational Biomedicine use computer models to describe and explore the properties of biological systems. These ever more detailed models push the boundaries of even the most powerful computer hardware and software available. While many of these models are parallelizable, different models and computational algorithms have different performance properties when run in parallel. For example, a molecular dynamics (MD) simulation of a large biological macromolecule performs best on a Symmetric Multiprocessor (SMP) system, because the different processors must have fast access to a large, common memory pool. Other simulation techniques, such as Monte Carlo methods, are better suited to running on clusters of independent computers. By tailoring the machine to the job at hand, researchers can be sure they are getting the most out of available computational resources, and are modeling biological systems of interest to the highest degree of detail and fidelity possible.
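To illustrate why Monte Carlo methods parallelize so well across independent machines, here is a minimal sketch in Python: each worker draws its own random samples with no shared state, and only the final counts are combined. The function names (`count_hits`, `estimate_pi`) and the pi-estimation task are illustrative choices, not from the original text; a real cluster job would distribute the same pattern across nodes rather than local processes.

```python
import random
from multiprocessing import Pool


def count_hits(args):
    """Count random points that fall inside the unit quarter-circle.

    Each worker uses its own seeded generator, so workers need no
    communication or shared memory while sampling.
    """
    n_samples, seed = args
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits


def estimate_pi(n_samples=1_000_000, n_workers=4):
    """Split the sampling across independent processes and merge counts."""
    per_worker = n_samples // n_workers
    tasks = [(per_worker, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        total_hits = sum(pool.map(count_hits, tasks))
    # Area ratio of quarter-circle to unit square is pi/4.
    return 4.0 * total_hits / (per_worker * n_workers)


if __name__ == "__main__":
    print(estimate_pi())
```

Because the workers never touch a common memory pool, this style of computation scales across loosely coupled cluster nodes; an MD simulation, by contrast, must exchange particle positions every timestep, which is why it favors the tightly coupled SMP architecture described above.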
With the steady increase in processor performance promised by Moore's law now largely a reality, the performance bottlenecks for scientific applications today often lie not in processor speed, but in the time it takes to read and write ever larger datasets to and from disk storage. Employing clustered file systems, whereby all computers on a storage area network (SAN) have direct fiber-optic access to all data on shared filesystems, yields significant performance and manageability benefits. This technology, first exploited primarily by the animated motion picture industry in large 'render farms', is now being employed in high-performance scientific computing installations as well.
With researchers generating and analyzing ever-larger datasets more and more quickly, advanced visualization techniques are needed to make sense of the data. To ensure that the Weill Cornell Medical College stays on the forefront of advanced scientific visualization techniques, we have committed to building a fully immersive 3D environment (CAVE).
The key advantage of the CAVE is its "immersive" visualization capability. While we can already look at 3D renderings of structures and abstract data sets on a 'regular' high-end graphics workstation, a CAVE allows:
- the user(s) to exist within the data set or structure, seeing it from multiple perspectives simply by moving around in the space.
- manipulation of the data and models being explored in a much more natural way than a keyboard and mouse allow. This manipulation can often involve tactile feedback through the use of haptic devices (e.g., gloves).
- collaboration - several researchers can enter the same space at the same time and manipulate the environment.