May 18, 2011
In the midst of the general excitement at this past year’s Supercomputing Conference in New Orleans, French high performance computing vendor Bull slipped in news about its HPC on demand service, eXtreme Factory. According to Pascal Barbolosi, the head of Extreme Computing at Bull, the on-demand service has taken off, with several million compute hours logged in the platform's first six months.
Unlike other, more general-purpose cloud or on-demand services, Bull's offering targets users with complex modeling and simulation needs. Many of the preconfigured codes are those used in manufacturing, film and engineering.
In an interview this week to check in on the service's progress, Barbolosi noted that, unlike commercial clouds, eXtreme Factory addresses the requirements of HPC customers by providing on-demand access to remote compute facilities with a preinstalled and preconfigured environment in which ISV applications and open source codes are ready to run.
In his view, public cloud resources designed in a one-size-fits-all fashion cannot match the requirements of high performance computing users. Accordingly, the Bull HPC head explains that his company opted to “position this HPC on demand service because HPC requirements make it rather different from commercial hyper-marketed clouds.”
Barbolosi told us this week that there were customers running applications on-demand with Bull before the actual launch of the HPC cloud. He pointed to a “well-known automotive manufacturer” that was using a few hundred cores of HPC compute servers via a high performance 100 Mbit telecom line earlier in 2010.
He says that as this customer has upgraded, replaced and adapted the number and capabilities of the bullx servers it uses over time, it has been able to keep running its CFD and crash-test applications without interruption. He points to this kind of flexibility as attractive to high performance computing customers, noting that the platform can be used in parallel with on-site resources.
Barbolosi identified another early adopter of the eXtreme Factory platform that used the service for a month in 2010, before the official launch. In this case the customer used CD-adapco’s STAR-CCM+ package with its cloud-friendly, portable ‘power on demand’ licensing mechanism. He said that, depending on the project's compute needs, the customer can use the same software and license on its own internal compute resources or on Bull’s. This worked out so well, he says, that the customer has signed on for fresh resources in 2011.
The eXtreme Factory is, not surprisingly, powered exclusively by Bull's own range of servers. According to Barbolosi, “Most of the infrastructure is comprised of bullx blades (both CPU-only B500 and mixed CPU/GPU B505) interconnected by an efficient QDR InfiniBand network, running bullx SuperComputer Suite and hosted in our data centers.”
Users access the service via a secure, SSL-certified portal that provides all the functionality needed for a complete HPC workflow: organizing and uploading input files, managing data, publishing applications, submitting and monitoring jobs, and remotely visualizing and downloading results.
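The portal workflow described above (upload inputs, submit a job, monitor it, retrieve results) follows the shape of any remote batch-computing service. The sketch below is purely illustrative: the `HpcPortalClient` class, action names and JSON fields are invented for this article and are not Bull's actual interface, which at the time was a web portal rather than a programmatic API.

```python
# Illustrative sketch of a remote HPC job workflow: upload -> submit ->
# poll status -> download results. All names here are hypothetical.

class HpcPortalClient:
    """Minimal client mirroring the workflow stages a portal exposes."""

    def __init__(self, transport):
        # transport: a callable (action, payload) -> dict, so the sketch
        # can run against a fake backend without any network access.
        self.transport = transport

    def upload(self, filename, data):
        return self.transport("upload", {"file": filename, "data": data})["file_id"]

    def submit(self, app, file_id, cores):
        return self.transport(
            "submit", {"app": app, "input": file_id, "cores": cores}
        )["job_id"]

    def status(self, job_id):
        return self.transport("status", {"job_id": job_id})["state"]

    def download(self, job_id):
        return self.transport("download", {"job_id": job_id})["results"]


def make_fake_transport():
    """In-memory backend so the sketch is runnable end to end."""
    store = {"files": {}, "jobs": {}}

    def transport(action, payload):
        if action == "upload":
            fid = "f%d" % len(store["files"])
            store["files"][fid] = payload["data"]
            return {"file_id": fid}
        if action == "submit":
            jid = "j%d" % len(store["jobs"])
            # The fake backend completes jobs instantly; a real service
            # would report QUEUED/RUNNING states before COMPLETED.
            store["jobs"][jid] = {"state": "COMPLETED", "results": b"ok"}
            return {"job_id": jid}
        if action == "status":
            return {"state": store["jobs"][payload["job_id"]]["state"]}
        if action == "download":
            return {"results": store["jobs"][payload["job_id"]]["results"]}
        raise ValueError("unknown action: %s" % action)

    return transport
```

The point of the transport abstraction is the one the article makes: from the user's side the workflow is the same whether the backend is an on-site cluster or a remote on-demand facility, which is what lets the platform run "in parallel with on-site resources."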
As the initial release described, in addition to “many thousands” of Xeon processors, the data centers are “equipped with a storage environment, with a distributed file system for maximum performance during the processing stages, as well as permanent storage facilities enabling the user, thanks to remote visualization, to enjoy all the convenience of being a local user while avoiding data transfer as far as possible.”
Beyond defending the obvious choice of Bull's own hardware to tackle the challenge, he explained that customers would not have been attracted to the service had it run on vanilla servers in a traditional cloud. As he put it, “Traditional clouds don’t offer efficient parallel compute capabilities; vanilla servers don’t offer the throughput that our customers expect.”
On that note, when asked about the way cloud hardware is being positioned as “cloud optimized” (and whether Bull was making that claim), Barbolosi said that as far as Bull is concerned, there is no feature unique to cloud-driven servers that differs from HPC-optimized servers. In other words, as he put it, there is strong commonality between the two domains, including performance, density and low power consumption.
Barbolosi says he expects there to be a rise in the overall market for cloud computing in the next decade. He says that many HPC usage models are well adapted to cloud as users require elasticity and the ability to easily ‘burst’ workloads. However, he notes, “there are some technical issues specific to HPC that need to be addressed, such as remote visualization of data (instead of transferring huge data sets back and forth) and the ability to flexibly manage resource allocation.”
He says these roadblocks have inspired a more conservative approach to HPC clouds than to proven business computing. “Nevertheless,” he says, “we consider that cloud will still be an important part [of the market] and could easily exceed 25% to 30% of HPC spending.”
To close, we can take a step back in time to SC10 for this video interview with Pascal Barbolosi as he introduces Bull’s big news, which includes, among other announcements, the eXtreme Factory.