December 10, 2007
SAN JOSE, Calif., Dec. 3 -- Xsigo Systems Inc., the technology leader in datacenter I/O virtualization, today announced the results of a recent survey that sought to better understand server input/output (I/O) requirements when using server virtualization in today's datacenters. More than 100 IT staff members at Fortune 5000 companies using server virtualization responded, expressing concern about the server connectivity challenges they currently face.
This survey revealed that IT managers encounter significant cost and cabling issues when configuring connectivity on servers running virtualization software. Compared with traditional servers, virtualized servers are being configured with more connections, and those configurations are being changed more frequently -- two factors that significantly drive up costs. Growing at nearly 41 percent per year, server shipments in support of virtualization are expected to reach 1.7 million units annually by the year 2010, according to IDC research.
Current I/O infrastructure in the datacenter was designed for traditional server usage, not the virtualized server implementations now on the rise. Because users often prefer dedicated connectivity for individual virtual machines, servers frequently require additional I/O. A simple problem, such as having a server with six I/O ports when seven are needed to accommodate virtualization, can add significant capital and labor expenses to a datacenter.
The Xsigo survey's most significant finding is that server virtualization significantly increases connectivity requirements: 75 percent of virtualization users configure seven or more I/O connections per server, compared to the two to four connections typical of a server running without virtualization software. Because virtualized servers run more applications and operate at higher levels of utilization than conventional servers, they are more likely to encounter I/O bottlenecks. As a result, more I/O connections are needed, to the extent that the cost of configuring I/O for a virtualized server frequently exceeds the cost of the server itself.
Other key findings from the virtualization survey revealed:
As cited by users in the survey, these findings indicate there is a direct impact on costs and management processes for the following:
"Users often overlook the demands that server virtualization puts on their I/O infrastructure," said John Humphreys, vice president of enterprise virtualization at IDC. "We see this as a growing problem, and one we encourage IT groups to consider as they move forward with virtual server deployments. Additionally, we have found that connectivity challenges stemming from live migration can be reconciled through I/O virtualization, technologies delivered by companies like Xsigo."
"We know now that virtualization users must add additional connectivity, but the question is whether the IT staff has the resources and tools to address I/O connectivity effectively and efficiently before applications experience a major impact," said Jon Toor, vice president of marketing for Xsigo Systems. "With virtual I/O, resources can be allocated in real time, without re-configuring cards or cabling."
Xsigo Systems launched in September 2007 specifically to address the I/O connection bottlenecks currently faced in the datacenter. It is estimated that large datacenters can lower their server-related operational expenses by up to 80 percent, cut capital costs by 50 percent, and use 70 percent less cabling by using Xsigo's technology. With Xsigo's VP780 I/O Director, IT managers can provision I/O resources on the fly without disrupting network and storage configurations, and without physically entering the datacenter. Xsigo consolidates the I/O infrastructure and replaces physical network and storage interfaces (NICs and HBAs) with virtual resources that are remotely manageable from a single console. Unlike alternative approaches, the Xsigo I/O Director offers reduced capital costs, greater management simplicity, and support for open standards.
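To make the virtual I/O idea above concrete, here is a minimal, purely illustrative Python sketch of a "director" that owns a shared pool of physical uplink ports and hands out virtual NICs and HBAs to servers on demand. The class and method names are hypothetical; this is not Xsigo's actual management interface, only a sketch of the concept of allocating I/O in real time without touching cards or cabling.

```python
# Hypothetical sketch of virtual I/O provisioning -- not Xsigo's API.
from dataclasses import dataclass, field

@dataclass
class VirtualInterface:
    kind: str      # "vNIC" or "vHBA"
    server: str    # server the interface is presented to
    uplink: str    # physical port it is mapped onto

@dataclass
class IODirector:
    # Shared physical ports, grouped by the interface type they can back.
    uplinks: dict
    allocations: list = field(default_factory=list)

    def provision(self, server: str, kind: str) -> VirtualInterface:
        # Map the new virtual interface onto the least-loaded eligible uplink,
        # so no cards or cables have to be touched.
        candidates = self.uplinks[kind]
        load = {u: 0 for u in candidates}
        for vif in self.allocations:
            if vif.uplink in load:
                load[vif.uplink] += 1
        uplink = min(load, key=load.get)
        vif = VirtualInterface(kind, server, uplink)
        self.allocations.append(vif)
        return vif

# Example: give a newly virtualized host two vNICs and a vHBA in real time.
director = IODirector(uplinks={
    "vNIC": ["10GbE-uplink-1", "10GbE-uplink-2"],
    "vHBA": ["FC-uplink-1"],
})
for kind in ("vNIC", "vNIC", "vHBA"):
    print(director.provision("esx-host-07", kind))
```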
Xsigo Systems Inc. is the technology leader in datacenter I/O Virtualization, helping organizations reduce costs and improve business agility. The Xsigo VP780 I/O Director consolidates server connectivity with a solution that provides unprecedented management simplicity and interoperability with open standards. The privately held company is based in San Jose, Calif., and funded by Kleiner Perkins, Khosla Ventures and Greylock Partners. For more information, visit www.xsigo.com.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and tackle large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational demands that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined these latency issues for CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types, using both CPU and GPU cores.
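As a rough illustration of how node-to-node latency is typically quantified in such studies, the sketch below (assuming Python with mpi4py; it is not taken from the Bonn work) runs a simple ping-pong microbenchmark between two MPI ranks and reports the estimated one-way message latency.

```python
# Ping-pong latency microbenchmark sketch; run with: mpirun -n 2 python pingpong.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1, dtype=np.float64)  # tiny message -> measures latency, not bandwidth
reps = 1000

comm.Barrier()
t0 = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = time.perf_counter()

if rank == 0:
    # One-way latency is half the average round-trip time.
    print(f"one-way latency: {(t1 - t0) / reps / 2 * 1e6:.1f} us")
```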
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
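For a minimal taste of what heterogeneous computing with OpenCL looks like in practice, the sketch below (assuming Python with the PyOpenCL bindings; it is an illustration, not code from the AMD article) offloads a vector addition to whichever OpenCL device is available, be it a CPU, a GPU, or an APU.

```python
# Minimal OpenCL vector-add via PyOpenCL: host code picks a device,
# copies data, runs a kernel across the array, and reads the result back.
import numpy as np
import pyopencl as cl

a = np.random.rand(50000).astype(np.float32)
b = np.random.rand(50000).astype(np.float32)

ctx = cl.create_some_context()        # any available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int gid = get_global_id(0);
    c[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, c_buf)
assert np.allclose(result, a + b)
```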