March 07, 2011
One year ago, SGI announced SGI Cyclone, its large-scale, on-demand cloud computing service dedicated specifically to technical applications. Around this first anniversary, it seemed like the perfect time to get an update from someone who is deeply involved with Cyclone: Christian Tanasescu. As Vice President of Software Engineering at SGI, Christian leads, among other responsibilities, SGI's activities around Cyclone. He was an easy catch for me because we know each other well from the good old Fortran 90 days!
Just for some background, in his VP of Software Engineering role, Christian is responsible for system software and middleware development, applications, ISV relationships and leadership of all activities around Cyclone, one of the first cloud offerings on the market dedicated to HPC applications. Since joining SGI in 1992, Christian has held a number of management positions in HPC system engineering, strategic partner management, performance modeling and application enablement. As initiator of the Top20Auto study to analyze development of HPC platforms and applications in the automotive industry, Christian has extensive knowledge of the manufacturing vertical. Prior to SGI, Christian worked at Fujitsu-Siemens in compiler development and served on the Fortran 90 standardization committee. Christian holds a master's degree in Computer Science from Polytechnic University of Bucharest.
Wolfgang: Christian, let’s start with the state of affairs at SGI and its current focus in the marketplace.
Christian: SGI is focused on the technical computing market, which addresses the ‘big data’ needs of both mission critical technical and business applications. Technical computing problems in science, engineering and business are addressed by compute-intensive and data-intensive applications. Compute-intensive workloads are model-based computations, where every single data element is important, the basic method is hypothesis testing, the model can be deconstructed, and runs well on clusters. Data-intensive workloads tend to be model-free computations, difficult or impossible to deconstruct, the basic method of which is pattern discovery, and run well in shared memory. Our goal is to accelerate time to results for customers in our target markets, which include: Internet and Cloud, Government, Research and Education, Manufacturing, Energy, and Financial Services.
Wolfgang: Please, tell us about SGI Cyclone.
Christian: SGI Cyclone cloud computing service is one of the first specifically dedicated to scientific and engineering applications. When SGI began to design Cyclone, we were razor-focused on offering our customers the applications that they are currently using to create their products or do cutting-edge research - in other words, applications that “are their business,” as opposed to applications that “support their business.” Email, CRM and HR programs are important support functions for any company, but technical applications are those that result in the design of a better, safer and quieter car or airplane, help discover new drugs or new oil reserves, or offer better forecasts of the weather, to name just a few examples.
Through Cyclone, SGI offers its performance-optimized software stack and hardware together with key technical computing applications from its partners or open source in the domains of Computational Biology, Computational Chemistry and Materials, Computational Fluid Dynamics, Finite Element Analysis, Computational Electromagnetics and Data Analytics.
A prominent feature in Cyclone is the flexibility of choices, because technical workloads have very different computational requirements. On Cyclone, customers have a choice of platforms (scale-up or scale-out), accelerators (NVIDIA Fermi, ATI FireStream and Tilera), operating systems (SUSE, RHEL, CentOS, Windows), interconnects (NUMAlink, InfiniBand, Gigabit Ethernet) and topologies (hypercube, all-to-all, fat-tree, single or dual rail).
Wolfgang: Many clouds offer Software as a Service (SaaS) and Infrastructure as a Service (IaaS). You are now offering a new kind of service model - Expertise as a Service (EaaS). Why do you think there is a need for this new service model?
Christian: EaaS is the consultative component of our HPC Cloud that brings real value to our computational science and engineering customers. We currently offer over 20 technical applications in the six HPC domains mentioned above. When we asked one of our primary ISV partners if they would work with a service like Amazon EC2's new HPC service, they declined because they don't have the in-house expertise to help their customers. SGI Cyclone can offer their software because we have a team of technical application engineering experts who for many years have been supporting the optimization and benchmarking of their software on our hardware systems. So it is logical by extension that we now offer our customers this Expertise as a Service (EaaS) model.
The other rationale for this EaaS offering is to enable a wider adoption of HPC in smaller and medium size companies. Analysts estimate that 100,000 companies exist in the manufacturing sector in the US that could use simulation technology as an intellectual amplifier in advancing their product designs, but cannot afford to do so because of the lack of (a) funds to acquire a cluster, (b) funds to buy the necessary software licenses, (c) IT expertise to operate the cluster, or (d) application expertise with commercial applications. Cyclone, with our extensive server infrastructure, software, and expertise as a service enables these companies to now get easy access to HPC platforms and applications.
Wolfgang: How does EaaS work?
Christian: Let’s take for example a small or medium-size business that is running a package like LS-DYNA on workstations to perform structural analysis of a new product they are designing. They are under a tight deadline to deliver their results and they need to greatly accelerate the turnaround time on their job runs. They could look into purchasing a cluster, but capital budgets are tight and they don’t have the IT expertise or bandwidth to set up and run a cluster. At some point they consider SGI Cyclone, where they can talk with an LS-DYNA applications expert, who not only helps them determine the size of the system they will need to quickly run their jobs, but will also walk them through the process and, if requested, load and launch their jobs for them. A set of simulations that takes three weeks on a quad-core workstation might only take 12 hours on a 256-core SGI Altix ICE 8400 InfiniBand cluster. The customer gets the results they need quickly and efficiently, without having to go through the hassle of buying hardware or the added expense of extra yearly software licenses. They only pay for what they used to run their simulations.
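As a rough sanity check on the turnaround numbers quoted above, here is a back-of-the-envelope calculation treating the figures as nominal; real LS-DYNA scaling of course depends on the model, the solver, and the interconnect, so this is only an illustration:

```python
# Back-of-the-envelope check of the quoted turnaround times:
# three weeks on a 4-core workstation vs. 12 hours on 256 cores.

workstation_cores = 4
cluster_cores = 256
workstation_hours = 3 * 7 * 24   # three weeks, expressed in hours
cluster_hours = 12

speedup = workstation_hours / cluster_hours      # 504 / 12 = 42x faster
core_ratio = cluster_cores / workstation_cores   # 64x more cores
parallel_efficiency = speedup / core_ratio       # fraction of ideal scaling

print(f"speedup: {speedup:.0f}x")
print(f"parallel efficiency: {parallel_efficiency:.0%}")
```

A 42x speedup on 64x the cores works out to roughly two-thirds parallel efficiency, which is a plausible figure for an implicit/explicit structural code at that scale.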
Wolfgang: What are some of the challenges that need to be addressed to run HPC workloads in the cloud?
Christian: On the hardware side, most cloud vendors offer virtualized instances on servers with limited scalability, memory allocation and lack of user control over node interconnect topology, which leads to unacceptable MPI latency while running many technical applications. SGI Cyclone squarely addresses these issues. We use virtualization technology only in the login/management node layer of the platform. We then provide bare metal access (I call it ‘physicalization’) to run their applications on our scale-out clusters, scale-up shared memory systems, or our hybrid clusters with accelerators.
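To see why interconnect latency dominates at scale, consider a toy strong-scaling model (my own illustrative sketch, not an SGI model, with hypothetical workload numbers): compute time shrinks as cores are added, but the per-message latency cost of each communication step does not.

```python
# Toy strong-scaling estimate: compute divides across cores, but the
# fixed per-message latency paid at every communication step does not.
# All workload numbers below are hypothetical, for illustration only.

def estimated_runtime(cores, compute_seconds, steps, latency_seconds,
                      messages_per_step=1):
    """time(p) = serial_compute / p + steps * messages * latency"""
    compute = compute_seconds / cores
    communication = steps * messages_per_step * latency_seconds
    return compute + communication

compute_s = 3600.0   # one hour of serial work (hypothetical)
steps = 100_000      # halo exchanges over the run (hypothetical)

# Roughly InfiniBand-class latency vs. a virtualized network stack:
for latency_us in (1, 100):
    t = estimated_runtime(256, compute_s, steps, latency_us * 1e-6)
    print(f"{latency_us:>3} us latency on 256 cores: {t:7.1f} s")
```

Even in this crude model, going from microsecond-class to hundred-microsecond-class latency nearly doubles the 256-core runtime, which is why bare-metal access and a low-latency fabric matter for tightly coupled MPI codes.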
On the software side, we have also found that many third-party commercial ISVs fear that providing their software via the cloud will crater their existing annual licensing revenue. We have been encouraging our software partners to experiment with this new business model by working with existing customers who already own annual licenses and providing them with easy access to purchase additional licenses via Cyclone or directly from the ISV. Some ISVs get it, while others are taking a wait-and-see stance.
Wolfgang: Similar to ISVs, aren’t you concerned that selling compute cycles in the cloud will erode SGI product revenue?
Christian: This is not what we have observed to date. For us, it is about customer choice. We ask our customers the following three simple questions: “What problem are you trying to solve, how much equipment do you feel you need today and will need in the future, and when do you need it?” We have a robust ‘build-to-order’ data center and modular data center business, and are participating in the current tech refresh cycle that is happening at large Internet companies and within financial services and virtualized cloud IT centers.
We are selling our new shared memory Altix UV and the latest version of our SGI Altix ICE scale-out clusters to our government, research and education, manufacturing, and energy customers. With Cyclone we complete the need. If our customers need a bridge to keep working as they wait to receive their newly acquired SGI platform they can use Cyclone. If they need to test new SGI technology before buying they can use Cyclone. If they need to combine on-site compute resources sized for an average workload with cloud-bursting capability, they can use Cyclone. We help them achieve these goals. And finally, if a new customer has a tight capital budget and they don’t have the IT expertise to set up and run a cluster, we can help them with our Cyclone Expertise as a Service (EaaS).
Wolfgang: What do you see as the role of cloud computing in future IT infrastructure?
Christian: Cloud computing is morphing the client-server model. On the server side, the path is going from supercomputers to datacenters to co-located datacenters to the ubiquitous use of the cloud. On the client side, the world is moving from the workstation/PC to netbooks, tablets and location-aware smart phones. I think this trend leads to fewer, very large data centers with their own co-located power plants providing cloud access to millions and eventually billions of mobile clients. The new smart phones coming onto the market will have enough compute capabilities to replace the business notebook and will plug into a docking station to perform basic office work using cloud-based applications and storage.
Of course, there are challenges that need to be addressed, the most important being inexpensive sustainable power for these mega datacenters, as well as the access to and protecting the security of mobile business data.
Dr. Wolfgang Gentzsch is the General Chair for ISC Cloud'11, http://www.isc-events.com/cloud11/