September 06, 2012
Last week, the Distributed Management Task Force (DMTF) announced a specification named Cloud Infrastructure Management Interface, or CIMI. The new spec aims to make interactions between IaaS providers and end users a little easier by providing a common set of rules that the group hopes will be adopted across a range of cloud vendors.
The DMTF collaborates with a number of international organizations including the Metro Ethernet Forum (MEF), Open Data Center Alliance (ODCA), International Committee for Information Technology Standards (INCITS), National Institute of Standards and Technology (NIST) and China Communications Standards Association (CCSA). In working with these groups, the task force wants to enable end users to have a familiar set of infrastructure management tools, regardless of their cloud provider.
Winston Bumpus, chairman of the DMTF, shared with HPC in the Cloud that the purpose behind this spec was to build an infrastructure management standard with wide industry support.
The DMTF receives input, participation and contributions from a number of companies and vendors in the industry to ensure interoperability. It also partners with several industry alliances in an effort to unify cloud management initiatives and to promote interoperability.
The CIMI v1.0 spec consists of two essential elements: a resource model that describes the infrastructure being managed (machines, volumes, networks and the templates used to create them), and a RESTful protocol for manipulating those resources over HTTP.
CIMI also supports the DMTF's Open Virtualization Format (OVF), an import/export packaging standard that has been adopted by ANSI and ISO.
The main feature of the spec so far is the REST protocol. This enables users to interact with infrastructures using requests over HTTP. Standard HTTP methods such as POST, GET, PUT and DELETE map onto the familiar Create, Read, Update and Delete (CRUD) operations, with HEAD available for retrieving metadata.
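To make the CRUD-over-HTTP idea concrete, here is a minimal sketch of how a client might build such requests. The base URL, resource paths and payload fields below are illustrative assumptions, not endpoints defined by the CIMI specification:

```python
# Sketch of mapping CRUD operations onto HTTP methods, CIMI-style.
# The endpoint and payload shapes are hypothetical, for illustration only.
import json
from urllib.request import Request

def cimi_request(base_url, operation, resource, body=None):
    """Build (but do not send) an HTTP request for a CRUD operation."""
    methods = {
        "create": "POST",    # create a new resource in a collection
        "read":   "GET",     # fetch a resource representation
        "update": "PUT",     # replace a resource's state
        "delete": "DELETE",  # remove a resource
    }
    data = json.dumps(body).encode() if body is not None else None
    req = Request(f"{base_url}/{resource}", data=data,
                  method=methods[operation])
    req.add_header("Accept", "application/json")
    return req

# A request that would create a machine resource at a hypothetical provider.
req = cimi_request("https://cloud.example.com/cimi", "create", "machines",
                   {"name": "web-1"})
print(req.method, req.full_url)
```

Because every provider speaks the same verbs against the same resource model, a client written this way needs only a different base URL to target a different vendor.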
Bumpus also mentioned that the standard is vendor neutral, and he expects CIMI to be adopted across a variety of cloud providers. When asked how the spec would work for bare metal environments, the chairman explained that it would be possible to implement CIMI in those situations as well, although the spec was designed for use in virtualized infrastructures. Whether services like Penguin or Zunicore choose to adopt the standard remains to be seen.
Given that lock-in is a concern with potential and existing cloud users, any specification that receives wide adoption across providers will likely come as welcome news. If CIMI is a success, it might pave the way for future platform standards.