Traditionally, High Performance Computing (HPC) clusters have been purpose-built systems that are statically defined and deployed. With the cloud, systems can instead be defined and reshaped in software, allowing users to dynamically customize the system architecture for their specific workflow.
While still in its early stages, the combination of HPC and the cloud is best described as “All Terrain Computing” due to its flexibility in tackling a variety of tasks. As an example, Argonne National Laboratory (ANL) is building an experimental cloud architecture running on a custom OpenStack implementation.
OpenStack, which began in 2010 as a joint project of NASA and Rackspace, is a cloud operating system that controls large pools of compute, storage, and networking resources. The infrastructure can be controlled through many software tools, whether web-based, command-line, or RESTful APIs.
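To give a flavor of the command-line control described above, a minimal `clouds.yaml` client configuration for the OpenStack CLI might look like the following sketch. All names, credentials, and endpoints here are hypothetical placeholders, not details of the ANL deployment:

```yaml
# clouds.yaml — client config read by the openstack CLI and openstacksdk
# (cloud name, endpoint, and credentials below are illustrative only)
clouds:
  example-cloud:
    auth:
      auth_url: https://keystone.example.org:5000/v3
      username: demo-user
      password: demo-password
      project_name: demo-project
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

With such a file in place, a user could list running instances with `openstack --os-cloud example-cloud server list`, or script the same operations against the RESTful APIs directly.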
The ANL OpenStack system tackles a number of problems across many fields, such as Bioinformatics, X-ray Photon Correlation Spectroscopy, and Cosmology. All of these projects place unique workload requirements on the HPC/Cloud system.
Ryan Aydelott began his career in data center networking, supporting the Internet Service Provider (ISP) industry throughout the 90’s. A move into the telecom space led to work with experimental wireless systems. He then shifted to the startup culture, where he built the infrastructure to support many rapidly growing companies. Currently his work involves data-focused cloud computing environments and supporting scientific workflows at ANL.