January 29, 2013
PORTLAND, Ore., Jan. 29 – The Distributed Management Task Force (DMTF), the organization bringing the IT industry together to collaborate on systems management standards development, validation, promotion and adoption, today announced the release of version 2.0 of its Open Virtualization Format (OVF) standard. The new version addresses emerging cloud use cases and provides important enhancements over OVF 1.0, including improved network configuration support and package encryption capabilities to ensure safe delivery. The updated standard reflects DMTF's continued mission to enable management interoperability in virtual and cloud environments.
“The OVF standard is the foundation of DMTF’s virtualization management and cloud standards development efforts,” said Jeff Hilland, President, DMTF. “Version 2.0 builds upon the success of the standard and brings an enhanced set of capabilities to the packaging of virtual machines – broadening the use of the standard for emerging cloud use cases.”
The standard, originally released in 2009, has provided the industry with a common packaging format for virtual machines – solving a critical business need for software vendors and cloud service providers. It was adopted as an international standard by ISO/IEC in 2011.
OVF has also been one of the most popular and widely adopted standards in the IaaS space. As cloud computing continues to gain traction in the industry, the updated standard will provide improved capabilities for virtualization, physical computers and cloud use cases – benefiting both end users and cloud service providers.
New improvements in OVF version 2.0 include improved network configuration support, package encryption for safe delivery, and broader support for emerging cloud use cases.
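Since an OVF package is described by an XML descriptor, a small script can inspect it directly. Below is a minimal Python sketch that lists the logical networks declared in a descriptor's NetworkSection; the file name is hypothetical, and the envelope namespace shown is assumed to be the one DMTF defines for OVF 2.0.

```python
# Minimal sketch: read the logical networks from an OVF descriptor
# using only the Python standard library.
import xml.etree.ElementTree as ET

# Assumed OVF 2.0 envelope namespace (OVF 1.x uses .../envelope/1).
OVF_NS = "{http://schemas.dmtf.org/ovf/envelope/2}"

tree = ET.parse("appliance.ovf")  # hypothetical descriptor file
envelope = tree.getroot()

# The NetworkSection enumerates the networks the virtual system expects.
network_section = envelope.find(f"{OVF_NS}NetworkSection")
if network_section is not None:
    for network in network_section.findall(f"{OVF_NS}Network"):
        # Each network is identified by its ovf:name attribute.
        print(network.get(f"{OVF_NS}name"))
```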
DMTF enables more effective management of millions of IT systems worldwide by bringing the IT industry together to collaborate on the development, validation and promotion of systems management standards. The group spans the industry with 160 member companies and organizations, and more than 4,000 active participants across 43 countries. The DMTF board of directors is led by 17 innovative, industry-leading technology companies: Advanced Micro Devices (AMD); Broadcom Corporation; CA Technologies; Cisco; Citrix Systems, Inc.; EMC; Fujitsu; HP; Huawei; IBM; Intel Corporation; Microsoft Corporation; NetApp; Oracle; Red Hat; SunGard Availability Services; and VMware, Inc. With this deep and broad reach, DMTF creates standards that enable interoperable IT management. DMTF management standards are critical to enabling management interoperability among multi-vendor systems, tools and solutions within the enterprise.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational loads that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles involved, and proposed workarounds for some of them.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call 'Climate in a Box,' a system they note acts as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types, including both CPU and GPU cores.
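To see why distance alone matters, a quick back-of-the-envelope calculation puts a physical floor under round-trip latency. This is only an illustrative sketch: the Bonn-to-US-East distance below is an assumed figure, and real latencies add routing and queuing delays on top.

```python
# Signals in optical fiber propagate at roughly 200,000 km/s (about 2/3 c),
# so geography alone bounds the best-case round-trip time to a remote cloud.
FIBER_SPEED_KM_S = 200_000   # approximate propagation speed in fiber
distance_km = 6_200          # assumed Bonn -> US East Coast path length

one_way_ms = distance_km / FIBER_SPEED_KM_S * 1_000
print(f"minimum round trip: {2 * one_way_ms:.0f} ms")  # ~62 ms before any queuing
```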
May 10, 2013
Australian visual effects company Animal Logic is considering a move to the public cloud.
May 10, 2013
Program provides cash awards up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
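To make the OpenCL mention concrete, here is a minimal, self-contained vector-addition sketch using the pyopencl bindings. It is an illustrative example under the assumption that an OpenCL runtime and device are available, not code from the article itself.

```python
# Minimal heterogeneous-computing sketch: add two vectors on whatever
# OpenCL device (GPU, CPU, or accelerator) the runtime exposes.
import numpy as np
import pyopencl as cl

a = np.random.rand(50_000).astype(np.float32)
b = np.random.rand(50_000).astype(np.float32)

ctx = cl.create_some_context()       # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel itself is plain OpenCL C; each work-item adds one element.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int gid = get_global_id(0);
    c[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)    # verify the device computed a + b
```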