March 27, 2012
Version two simplifies the cloud service consumption process and introduces end-to-end automation from VM image building to cloud provisioning contextualization
PARIS, March 27 — OW2, the open source infrastructure software community, announces that the CompatibleOne collaborative project has delivered version two of its open source cloud broker. Version two includes a host of cloud-oriented technology innovations including the automation of VM image building and of service provisioning contextualization. CompatibleOne v2 makes it easy for everyone to request cloud services.
With version one, DevOps teams could use manifests to describe services to be deployed on virtual machines (VMs) previously made available by cloud service providers. Version two adds two key features to the CompatibleOne platform: it automates cloud service provisioning, and it removes the dependency on suitable VMs having already been deployed. In a nutshell, CompatibleOne v2 builds service-specific VMs on the fly so that they can be deployed automatically by cloud service providers. As a result, it considerably simplifies the cloud service consumption process by letting users express their needs through high-level services with pre-defined manifests, made available as service catalogs. For example, CompatibleOne provides a demonstration in which a user simply requests a configuration comprising two VMs, one running an XWiki application server and the other a MySQL database.
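The XWiki/MySQL demonstration can be pictured as a high-level service request drawn from a catalog. The sketch below models such a request as plain data; the field names are illustrative only and do not reflect CompatibleOne's actual manifest schema:

```python
# Hypothetical model of a two-VM service request, loosely inspired by the
# XWiki + MySQL demonstration. All field names are illustrative.
service_request = {
    "name": "xwiki-demo",
    "nodes": [
        {"image": "xwiki-appserver", "count": 1},
        {"image": "mysql-database", "count": 1},
    ],
}

def images_needed(request):
    """Return the set of VM image names a request depends on."""
    return {node["image"] for node in request["nodes"]}
```

In this picture, the broker's job is to ensure every image returned by `images_needed` exists (building it if necessary) before asking a provider to deploy the nodes.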
Here is how CompatibleOne version two works. A user produces a manifest that describes the required service (for example, "instantiate a MySQL VM"), the required infrastructure, and the instructions for deploying this service as a VM. CompatibleOne checks whether the required VM image already exists in its repository. If the image is not found, it is built, and its description is stored in the repository for use in subsequent requests. The CompatibleOne broker then provisions and assembles the resources as described in the repository and, using a dedicated module to manage service metadata such as IP addresses and user credentials, configures and personalizes the provisioned service.
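The request cycle just described, checking the repository, building and caching the image if it is missing, then provisioning and contextualizing the service, can be sketched as follows. This is an illustrative outline under assumed interfaces, not CompatibleOne's actual API:

```python
def fulfill(manifest, repository, builder, broker):
    """Illustrative outline of the version-two request cycle.

    manifest   -- dict naming the required VM image (hypothetical shape)
    repository -- dict caching image name -> image description
    builder    -- callable that builds an image description from a manifest
    broker     -- callable that provisions a described image and returns
                  service metadata (addresses, credentials, ...)
    """
    image_name = manifest["image"]
    # Step 1: check whether the required VM image already exists.
    if image_name not in repository:
        # Step 2: build it on the fly and store it for subsequent requests.
        repository[image_name] = builder(manifest)
    # Step 3: provision the described resources via a cloud provider.
    metadata = broker(repository[image_name])
    # Step 4: contextualize the service with its metadata (IPs, credentials).
    return {"image": image_name, **metadata}
```

A second request for the same image would find it already cached in the repository and skip straight to provisioning, which is the source of the speed-up the release describes.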
"This important milestone is the result of major developments in the metadata management system, the image production system, the security services and the monitoring," says CompatibleOne principal architect Jamie Marshall. "As you can imagine, it really is a huge collective effort," he adds.
CompatibleOne v2 demonstrations are showcased on the OW2 booth (#D24) at Cloud Computing World Expo in Paris, March 28-29.
The CompatibleOne collaborative project develops the first industry-grade open source cloud broker. CompatibleOne was launched as a collaborative project to address the need for interoperability in cloud computing, and it quickly converged on developing a cloud computing broker. CompatibleOne is an open source collaborative project supported by 14 partners. Its technology is based on open standards, and its approach fully leverages OCCI, the Open Cloud Computing Interface. CompatibleOne has defined a four-step functional manifest-to-service provisioning cycle for its broker. The CompatibleOne platform is aligned with the Cloud Computing Reference Architecture of the National Institute of Standards and Technology (NIST).
OW2 is an independent industry community dedicated to developing open source code infrastructure (middleware and generic applications) and to fostering a vibrant community and business ecosystem. The OW2 Consortium hosts some one hundred technology projects, including ASM, Bonita, eXo Platform, JOnAS, JORAM, Orbeon Forms, Orchestra, Spagic, SpagoBI and XWiki. OW2 is an open source dissemination partner in a number of collaborative projects, such as CHOReOS, CompatibleOne, OpenCloudware and XLcloud. Visit http://www.ow2.org.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To address this, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle computational loads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.