December 12, 2005
Xen 3.0 supports Intel Virtualization Technology, which allows unmodified guest operating systems to run natively on the processor, exploiting hardware acceleration for CPU and memory virtualization. This support is key to Xen's ability to virtualize all operating systems. Xen will also support AMD's "Pacifica" hardware virtualization in early 2006.
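Whether a given x86 host exposes these hardware extensions can be checked on Linux by looking for the CPU feature flags the kernel reports: `vmx` for Intel Virtualization Technology, `svm` for AMD's Pacifica (later branded AMD-V). A minimal sketch:

```shell
# Check /proc/cpuinfo for hardware virtualization support (Linux only).
# "vmx" = Intel Virtualization Technology, "svm" = AMD Pacifica (AMD-V).
if grep -qE '\b(vmx|svm)\b' /proc/cpuinfo; then
    echo "hardware virtualization: yes"
else
    echo "hardware virtualization: no"
fi
```

Note that the flag only indicates processor capability; the feature can still be disabled in the BIOS.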
Xen 3.0 also supports up to 32-way SMP virtualized guests, with the ability to dynamically "hot plug" CPUs to ensure the best use of available resources. Used in conjunction with Xen's ability to dynamically relocate a running guest from one server to another, this capability enables IT managers to optimally place workloads on their available server resources. Additionally, Xen 3.0 offers two new addressing modes for servers with large memories: Physical Address Extension (PAE), which allows 32-bit servers to address more than 4GB of memory, and 64-bit addressing for up to 1TB of memory. The release also adds support for Trusted Platform Modules, which provide hardware-based security, attestation and trust, as well as security features contributed from IBM's secure hypervisor initiative. A port of Xen to Intel's Itanium architecture, contributed by HP and Intel, is also included, and a port of Xen to IBM's PowerPC architecture is close to completion, signaling broad cross-platform adoption of Xen.
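The memory figures behind these addressing modes follow directly from the address widths involved (assuming PAE's standard 36-bit physical addresses); a quick illustration:

```python
# Address-space arithmetic behind Xen 3.0's large-memory modes (illustrative).
GiB = 2 ** 30

classic_32bit = 2 ** 32 // GiB  # plain 32-bit physical addressing
pae_36bit = 2 ** 36 // GiB      # PAE widens physical addresses to 36 bits
addr_1tib = 2 ** 40 // GiB      # the 1TB figure cited for 64-bit mode

print(classic_32bit)  # 4    -> the familiar 4GB ceiling
print(pae_36bit)      # 64   -> up to 64GB of physical memory with PAE
print(addr_1tib)      # 1024 -> 1TB expressed in GiB
```

PAE achieves this on 32-bit hardware via a three-level page-table format, while each individual process remains limited to a 32-bit virtual address space.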
"This release represents a significant milestone for the Xen community," said Ian Pratt, Xen project leader and XenSource founder. "It is the result of a tremendous community effort, with contributions to date from over 150 developers worldwide, including more than 20 major enterprise infrastructure vendors, as well as the OSDL and 10 top-tier universities. The fact that Xen is the industry's fastest and most secure hypervisor for x86 systems is testimony to the depth of our community and the power of the open source process. The industry has embraced Xen as an emerging open industry standard for virtualization across all operating systems."
The community release signals that the code base is functionally complete and ready for further testing and validation by the Xen community. "We have been working hard to make Xen 3.0 available to the major Enterprise Linux vendors so they can begin QA for their next major releases, and to deliver a hypervisor that can exploit hardware virtualization and thus support proprietary operating systems," said Pratt. "Now we are turning our focus to extending our community testing program, hardening, and performance-tuning."
"As Linux gains ground in enterprise data centers, virtualization has become a key requirement," said Stuart Cohen, CEO of the Open Source Development Labs (OSDL), a global consortium dedicated to advancing Linux adoption in the enterprise. "Xen has established itself as a proven open source standard for cross-operating-system, cross-platform virtualization and plays a key role in the increasing success of Linux in the enterprise."
"Intel Virtualization Technology coupled with Xen 3.0 will deliver the benefits of Intel's market-leading hardware virtualization to both client and server markets from the get-go," said Doug Fisher, general manager of the Core Software Division at Intel.
Xen 3.0 is the first major release of Xen since the October 2004 release of Xen 2.0, which saw significant deployment in ASP, retail, hosting, and development-and-testing environments. The new release delivers a feature set needed by large enterprises seeking to adopt virtualization in the data center, to realize the benefits of increased server utilization, server consolidation, "instant on" provisioning of servers and no-downtime maintenance. Virtualization of enterprise servers cuts capital expenditures and personnel costs associated with deployment and management of IT infrastructure. Xen's ability to instantly deploy a virtual server image on any server dramatically cuts provisioning time -- from weeks to seconds -- and its live relocation capability enables no-downtime maintenance, high availability and optimal matching of workloads to available compute resources.
This release represents the first public availability of the Xen 3.0 code base and feature set to the broader open source Xen community, allowing Xen partners and vendors to begin performance testing, quality assurance, stabilization, and development of their Xen-based offerings. Xen 3.0 will be distributed by the leading enterprise Linux distributors in Novell's SUSE Linux Enterprise Server and Red Hat Enterprise Linux. Sun also recently announced plans to offer paravirtualized Solaris on x64 virtual servers running on Xen.
"Red Hat recently announced that it will integrate and support Xen 3.0 virtualization in the upcoming Red Hat Enterprise Linux release, which is expected to ship by the end of 2006," said Brian Stevens, CTO of Red Hat Inc. "Prior to that, Xen will be available in Fedora Core 5, and we are working closely with the XenSource team to ensure a smooth inclusion in the Red Hat release process. Virtualization should be available on every server, and Xen-based virtualization offers a high-performance solution that will allow users to dynamically provision new instances of Linux servers across a grid, or within a single OS instance. Our goal is to be pervasive and disruptive, and we don't want to treat Xen as a niche." Red Hat has been a significant contributor to Xen 3.0, and recently announced that it would assist XenSource with upstreaming the Xen paravirtualization patches into kernel.org.