SANTA CLARA, Calif., March 23, 2011 -- Citrix Systems, Inc. today announced that IBM has certified Citrix XenServer on System x and BladeCenter Servers to make deployment faster and easier for customers. The rigorously pre-tested configurations enable customers to more easily leverage Citrix XenServer and IBM servers to automate datacenter management processes and increase efficiency of datacenter infrastructures.
Today’s announcement expands the growing market momentum for XenServer in both cloud and enterprise datacenters. More than 50,000 enterprises worldwide now deploy XenServer for server virtualization, including 50 percent of the Fortune 500. XenServer also continues to gain share among cloud providers, building on the presence of the open source Xen hypervisor, the most widely deployed virtualization platform in the cloud. It is also the most widely used hypervisor for virtual desktops, hosting an estimated 2.5 million VDI-based desktops.
IBM has a proven track record of customer success with Citrix-based solutions running on IBM System x and BladeCenter servers. For customers deploying the pre-qualified configurations, IBM provides server hardware warranty support, while Citrix offers procurement and software support. To access the pre-qualified configurations, and find more details on the tests conducted, please visit the IBM and XenServer page. In addition, the IBM System x and BladeCenter servers are featured on the Citrix Hardware Compatibility List.
Citrix Systems, Inc. (NASDAQ:CTXS) is a leading provider of virtual computing solutions that help companies deliver IT as an on-demand service. Founded in 1989, Citrix combines virtualization, networking, and cloud computing technologies into a full portfolio of products that enable virtual workstyles for users and virtual datacenters for IT. More than 230,000 organizations worldwide rely on Citrix to help them build simpler and more cost-effective IT environments. Citrix partners with over 10,000 companies in more than 100 countries. Annual revenue in 2010 was $1.87 billion.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. We therefore present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate their work and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems for which modeling the entire Earth is almost essential to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types featuring both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.