June 18, 2012
A benchmark to measure cloud computing performance is in the works. The Standard Performance Evaluation Corporation (SPEC) has created a new working group tasked with developing a set of metrics for cloud services. Computerworld reported on the story this past Friday.
This week, the International Supercomputing Conference (ISC'12) will unveil the list of the 500 fastest computers on the planet, ranked by the number of floating-point operations they can retire per second. Cloud services, by contrast, lack any industry-wide method for gauging performance. To address the gap, SPEC's Open Systems Group (OSG) has formed a new subcommittee, OSGCloud, with participants from AMD, Dell, IBM, Ideas International, Intel, Karlsruhe Institute of Technology, Oracle, Red Hat and VMware, as well as accomplished SPEC benchmark developer Yun Chao. OSGCloud will collaborate with other SPEC groups "to define cloud benchmark methodologies, determine and recommend application workloads, identify cloud metrics for existing SPEC benchmarks, and develop new cloud benchmarks."
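For readers unfamiliar with the FLOP/s metric, the idea can be illustrated with a toy micro-benchmark. The TOP500 list actually uses the LINPACK benchmark on highly tuned systems; the sketch below merely times a fixed number of floating-point multiplies and adds in pure Python, so the numbers it reports include interpreter overhead and are illustrative only.

```python
import time

def estimate_flops(n_ops=1_000_000):
    """Return a very rough floating-point-operations-per-second estimate."""
    x = 1.0
    start = time.perf_counter()
    for _ in range(n_ops):
        x = x * 1.0000001 + 0.0  # one multiply and one add per iteration
    elapsed = time.perf_counter() - start
    return (2 * n_ops) / elapsed  # two floating-point operations per loop

if __name__ == "__main__":
    print(f"~{estimate_flops():.2e} FLOP/s (interpreter overhead included)")
```

Real HPC benchmarks avoid this overhead by running dense linear-algebra kernels in optimized native code, but the metric being computed is the same: operations retired divided by wall-clock time.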
The group has its work cut out for it, as resources in cloud environments are highly mutable. "…you don't just have a single-sized computer most of the time. You are allowed to vary to your needs," Rema Hariharan, chair of OSGCloud, told Computerworld. Some performance measurements, such as throughput, may find their roots in existing tests, but others will have to be created from scratch. Elasticity, for example, might play a role in the new benchmark: providers would be vetted on how quickly they can adapt services to meet customer demand.
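One way an elasticity test could work is to time how long a provider takes to bring requested capacity online. The sketch below is purely hypothetical; the provider interface (`request_instances`, `count_ready`) is invented for illustration, and no such standard API exists. A `FakeProvider` is included so the example is self-contained.

```python
import time

def measure_elasticity(provider, target_instances, timeout=600, poll=5):
    """Return seconds until `target_instances` are ready, or None on timeout."""
    provider.request_instances(target_instances)
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if provider.count_ready() >= target_instances:
            return time.monotonic() - start
        time.sleep(poll)
    return None

class FakeProvider:
    """Stand-in provider that reports capacity ready after a few polls."""
    def __init__(self, ready_after_polls=2):
        self._polls_left = ready_after_polls
        self._target = 0
    def request_instances(self, n):
        self._target = n
    def count_ready(self):
        if self._polls_left > 0:
            self._polls_left -= 1
            return 0
        return self._target
```

A real benchmark would repeat such a measurement under varying load and report a distribution rather than a single provisioning time, but the core idea is the same: elasticity is scored as time-to-capacity.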
While final decisions have yet to be made, SPEC plans to build the new benchmark around three main audiences: hardware and software vendors that enable cloud services, service providers (IaaS, PaaS, SaaS), and enterprise users looking to compare providers.
One concept under discussion classifies services according to how much they disclose about the underlying hardware. Black-box services would be judged without revealing information about platform software or server components; white-box providers, on the other hand, would disclose equipment specs such as processors, GPUs, storage and interconnects.
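The black-box/white-box distinction could be captured in a benchmark result schema along the following lines. This is a speculative sketch, not SPEC's design; the field and class names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CloudBenchmarkResult:
    provider: str
    score: float
    # White-box submissions disclose equipment specs;
    # black-box submissions leave these fields unset.
    processors: Optional[str] = None
    gpus: Optional[str] = None
    storage: Optional[str] = None
    interconnect: Optional[str] = None

    @property
    def disclosure(self) -> str:
        """Classify the submission by how much hardware detail it reveals."""
        specs = (self.processors, self.gpus, self.storage, self.interconnect)
        return "white box" if any(specs) else "black box"
```

Under such a scheme, two results could be compared on score alone, while white-box entries would additionally let users attribute performance to specific hardware choices.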
OSGCloud has surveyed a number of existing benchmarks and measurement tools in a 50-page report on the subject.
"We have gotten off to a great start in creating guidelines for benchmarks with clearly defined, standardized metrics, but we would like to see wider participation, especially from cloud providers and users," Hariharan disclosed in the group's launch statement.
Ultimately, a definitive set of measurements could prove helpful to cloud users and providers alike. Users would gain side-by-side comparisons based on the needs of their workloads, while service providers could use the benchmarks to differentiate their offerings from competitors'.